What is a good source for info on disease frequency distribution among age groups?



I need information on the disease frequency distribution among age groups for an Android app I'm building… Hopefully I've come to the right place. Is there a good data source for this? Like a bioinformatics database?


You may need to study each disease individually or do a literature review. The Online Mendelian Inheritance in Man website may be a good starting point: http://www.omim.org/. The Centers for Disease Control and Prevention lists several diseases with a variety of different statistics here: http://www.cdc.gov/DiseasesConditions/


A population is simply a group of people with some common characteristic, such as age, race, gender, or place of residence. A "target population" is a population for which you would like to make some conclusions. Examples:

  • residents of Mumbai
  • members of Blue Cross/Blue Shield (a U.S. health insurance organization)
  • postmenopausal women in Massachusetts
  • coal miners in Pennsylvania
  • male physicians in the United States
  • members of the BUSPH intramural softball team

Types of Epidemiological Studies | Essay | Epidemiology | Medical Science

Read this essay to learn about the two main types of epidemiological studies. The types are: 1. Observational Epidemiological Studies and 2. Experimental Epidemiological Studies.

Essay # 1. Observational Epidemiological Studies:


(a) Descriptive Studies:

Descriptive epidemiology deals with the study of the magnitude of a disease or condition and its distribution, or the magnitude of exposure to risk factors and their distribution, within a human population with reference to person, place and time of occurrence.

In order to do this, the descriptive studies have to deal with:

(I) Description of the occurrence of a disease or condition or the exposure to risk factors:

1. What is a ‘case’ or what is a risk factor and what is exposure?

2. Who are the persons affected and how?

3. What is (are) the place(s) of occurrence of case or exposure?

4. What is the time of occurrence of case or exposure?

The points to be kept in mind in answering the above questions are:

1. ‘Case’ and ‘exposure’ must be defined in objective measurable terms.

2. Adequate relevant details about persons, namely Age, Sex, Marital Status, Occupation, Education, Socio-economic Status and others like Family size, Parity, Birth order, Maternal age, etc., as appropriate for the study should be collected.

3. Details on the place of occurrence (including details on migration), like urban/rural, regions within a Country or State, etc., are needed.

4. The time interval or measurement of occurrence of cases or exposure should be clearly specified. Seasonal changes should be taken into account, and the occurrence of disease or exposure can be influenced by natural catastrophes, etc.

(II) Purpose of Descriptive Studies:

1. Estimate the prevalence or incidence of a disease or condition in a defined population group.

2. Describe the characteristics of people, with the disease or condition at a point in time or over a period of time.

3. Estimate the frequency and duration of exposure to risk of people.

4. Describe the characteristics of ‘exposed’ people at a point of time or over a period of time.

Descriptive studies can, therefore, be either cross-sectional (that is, data collected from a cross-section of the population at a point in time) or longitudinal (data collected at a number of chosen points of time) or can be in the nature of Surveillance Studies or continuous monitoring studies.

The results of such studies generally serve one of two purposes:

(a) To develop hypotheses to be tested subsequently through specially designed studies, and

(b) To develop strategies for reducing or preventing exposure to strongly suspected risk factors.

Special surveys (including physical examinations, anthropometric measurements, etc.), review of records (provided they are complete and accurate) from specific groups (industrial labour, government servants, armed forces, etc.), and vital statistics are the sources of data for Descriptive studies.

(i) Case-Control Studies:

The study of Causation and of Prevention and Control of diseases or health problems can be looked upon as a four-stage process. Case-Control studies and Cohort Studies form the first two stages of this process, and Experimental Studies and Practical Prevention or Control Programmes form the third and fourth stages. This order may be altered in some situations; it may not always be possible or desirable to carry out Cohort Studies or Experimental Studies.

In the Case-control study, we proceed from an observed ‘effect’ (a disease or health condition) to identify possible causal factors. The nucleus of the study, therefore, is a series of cases of a disease or condition on whom information is obtained to find out the frequency of previous exposure to a suspected cause.

This is compared with the frequency of occurrence of the suspected cause in one or more groups of persons who do not have this condition or disease. In other words, the case-control study is a comparative study between a group of cases and one or more groups of ‘controls’.

It may, at this point, be helpful to look briefly at an example to understand the sequence of epidemiological research. A current problem of interest related to the role of dietary factors in disease is that of the relationship between dietary factors and Colorectal Cancer.

The epidemiological evidence that has accrued on this problem can be described as indicated below:

1. Identification of the problem: a Descriptive Study of age- and sex-specific Colon Cancer death rates in a population.

2. Identification of the potential role of diet as a risk factor: a comparative study of age- and sex-specific Colon Cancer death rates in a predominantly meat-eating population versus a predominantly vegetarian population.

3. Testing the hypothesis on the role of diet as a risk factor: a comparative study of Colon Cancer death rates in the meat-eating and non-meat-eating subgroups within the predominantly meat-eating population (case-control study).

4. Seeking evidence to strengthen the hypothesis on the role of diet: faecal analysis reveals higher concentrations of faecal bile acids in Colon Cancer cases than in Controls.

5. Testing the evidence on bile acids through an experimental study: experimental demonstration that dietary manipulation in humans alters faecal bile acids.

6. Practical prevention trials: in free-living populations, altered dietary intake results in a reduced incidence of Colon Cancer.

Since the Case-control study looks from ‘effect’ to ‘possible cause’, it is also often called a ‘Retrospective study’.

Need of Controls:

The need for a Control group will become obvious if we remind ourselves of the fact that any given cause (or group of causes) may neither be necessary nor sufficient for the causation of a disease. Not all meat-eaters develop Colon Cancer and some vegetarians also develop Colon Cancer.

Because of this lack of one-to-one correspondence between cause and effect in either direction, evidence on association between diseases and ‘causal factors’ can be obtained only by comparing the relative frequency of occurrence of the ‘causal factors’ in cases and controls.

Selection of Cases:

Having defined a ‘Case’ unambiguously using a set of reproducible criteria, careful attention should be paid to the source from which cases will be recruited.

There are, broadly speaking, two sources:

(1) Case records from a hospital or hospitals, and

(2) Patients having the disease in a specified period of time.

The use of case records as source material for retrospective studies can suffer from one or more of three drawbacks:

a. First, the case records may often be incomplete.

b. Secondly, there may be lack of uniformity in the recording of information for different patients, more detailed information being recorded for, say, cases with complications or multiple conditions.

c. Thirdly, a serious drawback of routine case records from hospitals is that they contain information only about factors that the clinician thinks are associated with the disease in question; they may merely reflect preexisting ideas about etiology.

The method of choice, therefore, is to select a sample of patients (ideally all patients suffering from the disease) from the population at a point or during a period of time. Use of a hospital as the source of cases may be more convenient but may give rise to ‘selection bias’.

For example, not all cases may report to hospitals, resulting in some sort of self-selection that affects the representativeness of the sample; and if the suspected etiological factors themselves are associated with the chances of admission to the hospital, selecting cases from the hospital can give rise to a spurious association.

Nevertheless, if a case-control study is designed to explore a series of hypotheses rather than test a definitive hypothesis about a small number of factors, one may consider selection of cases from a hospital.

One further point of importance to note in the selection of ‘Cases’ is that it is advisable to use only newly diagnosed cases rather than previously diagnosed cases (or a ‘prevalence’ sample, as it is called).

The ‘prevalence’ sample excludes those that have died due to the disease and may, therefore, lead to a misinterpretation of factors conducive to long survival of patients as factors associated with excess liability to the disease.

Consider the following purely hypothetical illustration:

Assume that risk from all diseases except disease X can be eliminated. Now consider a person at time ‘T’ being alive and free from disease X.

At a subsequent time ‘T′’ this person can be in one of the following three states:

1. Alive without disease X

2. Alive with disease X

3. Dead from disease X

Now suppose we consider ‘factor Y’ as a potential etiological factor.

We can choose probabilities of contracting disease X and dying from it as shown below in the table:

A case-control study that used a ‘prevalence’ sample at time ‘T′’ would exclude the ‘dead’ cases and incorrectly draw the inference that factor Y was an etiological factor while, in truth (in the hypothetical situation), Y is beneficial with respect to X.

Even when “newly diagnosed” (instead of ‘prevalence’ cases) are included for study, care should be taken to see that in cases where the disease or condition had existed for some period before the diagnosis, factors associated with the course of illness are not identified as etiological factors. In other words, care should be taken to get information on the existence of the etiological factors in such cases before the disease started in them.

A further point for consideration is the possibility of selecting ‘cases’ on the basis of a preliminary (or admission) diagnosis. Such cases can help in identifying ‘interviewer bias’ by comparing information on ‘ultimately confirmed’ cases and ‘initially incorrectly diagnosed’ cases.

A final point of interest to note is that if the case fatality of a disease showed a strong differential between those who possessed the possible etiological factor under investigation and those who did not and, if the disease was rapidly fatal, a case-control study is not advisable. A ‘Cohort Study’ is advisable in such a situation.

Selection of Controls:

The function of a control group in a case-control study is to provide an estimate of the ‘expected’ exposure in the case group. It is, therefore, essential that the Control group be comparable with the Case group in all relevant respects except that the Controls do not suffer from the disease being studied.

Ideally, for a Case-control study, all cases of a disease occurring in a defined geographic area should form the Case group and a random sample of the general population should form the Control group. The point is that the Control group should be free of selection bias.

For example, if you are doing a Case-control study on aspirin use and myocardial infarction, you cannot use a set of arthritis patients as your controls or a set of peptic ulcer patients as the controls since neither of them would be representative of the general population with reference to use of aspirin.

On occasions, one may not be able to define the ‘universe’; for example, if cases are to be picked from a hospital or hospitals, the ‘universe’ is not well-defined. In such a situation, choose cases and controls from the same source. Hospital controls usually are less affected by information bias than general population controls.

The question of matching (frequency matching or individual matching) should also be considered carefully in selecting a Control group. Matching is done to control potentially confounding variables. (Effects of matched variables cannot, therefore, be evaluated.) When the controls are so selected that they have the same proportionate distribution of the matching variables as the case group, it is called ‘frequency matching’.

On the other hand, if an individual case is matched for one or more variables with an individual (or more than one) control, we have ‘paired’ matching. Matching on too many factors is neither feasible nor desirable. Lack of matching can be taken care of at the analysis stage through either stratification or by applying the analysis of covariance technique.
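To make the two matching strategies concrete, here is a minimal frequency-matching sketch in Python; the data, group labels, and sample sizes are all invented for illustration, and real studies would match on more carefully chosen variables.

```python
# A minimal sketch of frequency matching, using invented data. Controls are
# sampled so that their age-group distribution matches that of the cases.
import random
from collections import Counter

random.seed(0)

# Hypothetical study pools: each person is a (label, age_group) pair.
cases = [("case%d" % i, g)
         for i, g in enumerate(["40-49"] * 30 + ["50-59"] * 50 + ["60-69"] * 20)]
control_pool = [("ctrl%d" % i, random.choice(["40-49", "50-59", "60-69"]))
                for i in range(1000)]

case_dist = Counter(g for _, g in cases)  # e.g. Counter({"50-59": 50, ...})

controls = []
for group, n_needed in case_dist.items():
    candidates = [p for p in control_pool if p[1] == group]
    controls.extend(random.sample(candidates, n_needed))

# The control group now has the same age-group distribution as the cases.
print(Counter(g for _, g in controls) == case_dist)  # True
```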

Statistical Assessment of Association:

The statistical significance of the association is tested by using an appropriate χ² test. (For individually matched cases and controls, McNemar’s χ² test is used instead of the usual χ² test.)

The strength of the association is measured, where applicable, using a measure of ‘Relative Risk’ (RR). The relative risk is defined as the ratio of the incidence of the disease in the exposed group to the incidence in the non-exposed group.

However, the case-control study, since it starts with an observed effect and not the suspected cause, cannot get a direct estimate of ‘Incidence’ and therefore cannot get a direct estimate of the relative risk. An estimate of the relative risk can be calculated from the case-control study by working out what is called the ‘Odds Ratio’ (OR).

With the exposure counts laid out in the usual 2 × 2 table (cases: a exposed, b not exposed; controls: c exposed, d not exposed), an estimate of the relative risk is given by the ratio:

OR = (a × d) / (b × c)

This estimate is a reasonably good approximation to the ‘relative risk’ if the incidence of the disease is relatively low (roughly, less than 5%).
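As a concrete illustration of the calculations just described, the following sketch computes the odds ratio and the usual (unmatched) chi-square statistic for a hypothetical 2 × 2 table; the counts are invented, and the code is not from any epidemiological package.

```python
# A minimal sketch, with invented counts, of the usual unmatched 2x2 analysis:
# Pearson's chi-square for the significance of the association, and the odds
# ratio (ad/bc) as an estimate of the relative risk.
a, b = 40, 60   # cases:    exposed (a), not exposed (b)
c, d = 20, 80   # controls: exposed (c), not exposed (d)
n = a + b + c + d

# Odds ratio: odds of exposure among cases / odds of exposure among controls.
odds_ratio = (a * d) / (b * c)  # (40*80)/(60*20) = 2.67

# Pearson chi-square for a 2x2 table (1 degree of freedom).
chi_square = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(f"OR = {odds_ratio:.2f}, chi-square = {chi_square:.2f}")
# OR = 2.67, chi-square = 9.52. With a rare disease (incidence under ~5%),
# the odds ratio closely approximates the relative risk.
```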

Advantages of Case-Control Studies:

(1) Useful in health problems of low incidence.

(2) Useful in health problems with long latent interval.

(3) The study period is usually short and sampling is convenient making the study less expensive in time and resources.

(4) Can examine simultaneously a number of different hypotheses.

Disadvantages of Case-Control Studies:

(1) Considerable potential for selection bias exists.

(2) Information bias could occur since data are collected by recall after the occurrence of the disease. Also, if records are used, they could be incomplete.

(3) Cannot examine all possible outcomes of an exposure, since ‘outcome’ is the starting point for the study.

(4) Cannot provide direct estimate of incidence or relative risk.

Selection of Study Cohorts:

The comparison groups in Cohort studies are selected according to ‘Exposure’ (Exposed vs. Non-Exposed Groups) and the groups are followed in identical fashion to observe the development of a health problem over a defined period of time.

Groups of individuals for Cohort Studies may be selected in a variety of ways:

(1) Groups that have experienced some special exposure, the results of which are to be evaluated,

(2) Groups that offer advantages for following-up because of special facilities or groups that facilitate identification of particular outcomes among its members and

(3) A combination of both the above reasons.

Occupational groups with heavy exposure to chemicals (such as workers in the dye-stuff industry, who we now know run a high risk of urinary bladder cancer) and groups or individuals exposed to heavy ionizing radiation during war or to ionizing radiation as part of medical treatment are examples of special exposure groups.

Studies on survivors of the atomic bombing of Hiroshima and Nagasaki are a particular example of a fruitful Cohort study of a special exposure group. Persons enrolled in health insurance schemes, obstetric populations, and Army populations are all examples of groups that are easy to follow up or that offer special facilities for identification of outcomes.

Exposure Data — Sources of Information:

There are three sources of information for exposure data:

(1) Information from existing records (e.g., medical or occupational records).

(2) Information collected from individuals in the group.

(3) Information obtained by medical examination or other special investigation.

It may be necessary, in a given study, to use information from all the above sources to get complete data on exposure. It is possible, however, that in some situations only one of the above sources is reliable.

For example, the dose of radiation received by an individual as part of medical therapy can be obtained reliably only from the medical record, while information on exposure variables like blood pressure, body build or blood chemistry values can be obtained only by medical investigation. Careful thought should be given to deciding the most reliable source of exposure information.

A very important aspect of obtaining exposure data is the question of ‘Non-response’ from individuals in a group. One should almost always expect some degree of non-response from population groups. The true relationship between exposure and outcome will be affected in the cohort study only if the non-response is selective with respect to both exposure and outcome.

Since this is not easy to determine, special efforts are needed:

(a) To collect exposure data on a subsample of non-respondents,

(b) To compare non-respondents with respondents on general information like Age, Sex, etc., which can be obtained from outside the source of exposure information, and

(c) Follow-up, where feasible, of non-respondents (as well as respondents) to compare outcomes like deaths or hospital admission, etc. in the two groups.

Selection of Comparison Groups:

1. Internal Comparison Groups:

A single cohort is selected for the study and its members are classified into exposure categories, on the basis of information obtained on these at entry to the study.

For example, all pregnant women in a defined geographic area during a specified period form the cohort, to study the association of pre-natal factors (including past pregnancy history) with low birth weight. The women can, on entry to the study, be classified according to their pre-natal status and comparisons made.

2. Comparison with Population Rates:

Outcomes in a study cohort may be compared with experience of the general population during the period of the study. This is particularly true when ‘special exposure cohorts’—like the ones in Japan after atomic bombing —are studied.

3. Comparison with another Specially Selected Cohort:

An unexposed group similar in demographic characteristics to the exposed group may be selected.

Outcome Determination:

The procedures for ascertaining outcomes will naturally differ depending on their nature. The procedures may vary from collection of information from routine records to periodic medical examination/investigation of each member of the study and comparison groups.

It is important to note that completeness of ascertainment of outcomes should be the same in both study and comparison groups. One point to be kept in mind is the possibility that the diagnosis of outcome may be influenced by exposure class. ‘Blind’ readings for outcomes with elements of subjectivity should be used wherever feasible. Objective tests should be used to assess outcomes, wherever feasible.

Two Problems Need Attention:

(1) To what extent are the observed differences (or lack of differences) between the comparison groups due to methodologic difficulties, and

(2) Whether observed differences in outcome between exposure categories can be taken as reflecting causal relationship between exposure and outcome. The first is related to points discussed in the ‘Selection of the comparison cohorts’.

Strength of association, consistency in repetitive studies under differing conditions, evidence of a dose-response relationship, biological plausibility and other ancillary evidence need to be looked at before a ‘causal relationship’ can be inferred from an epidemiologic study. The size of the ‘relative risk’ is a better index of a causal relationship between exposure and outcome than the attributable risk.
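The contrast between the two measures is easy to see with invented cohort numbers; the sketch below assumes hypothetical counts and simply applies the definitions (relative risk as a ratio of incidences, attributable risk as their difference).

```python
# A minimal sketch, with invented cohort counts, of the two measures named
# above: relative risk (a ratio of incidences) and attributable risk (their
# difference).
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

incidence_exposed = exposed_cases / exposed_total        # 0.03
incidence_unexposed = unexposed_cases / unexposed_total  # 0.01

relative_risk = incidence_exposed / incidence_unexposed      # 3.0
attributable_risk = incidence_exposed - incidence_unexposed  # 0.02 (20 per 1,000)

print(relative_risk, attributable_risk)
```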

Advantages of Cohort Studies:

(1) Generally free from certain selection biases. Time sequencing (the causal factor occurring before the effect) is automatically taken care of, and since information on ‘exposure’ is collected before effects are observed, interview bias is eliminated.

(2) A spectrum of outcomes from a particular exposure can be observed.

(3) Provides direct estimates of incidence rates and relative risks.

Disadvantages of Cohort Studies:

(1) Can examine only one hypothesis at a time. New hypotheses cannot be generated after the start of the study, since ‘exposure’ is the starting point for the study.

(2) Inefficient—both statistically and practically for studying rare diseases.

Essay # 2. Experimental Epidemiological Studies:

Randomised Clinical Trials (RCT):

The Randomised Clinical Trial (RCT) differs from the Cohort study in one essential respect. The ‘exposure’ status of individuals admitted to an RCT is decided on the basis of a random allocation.

In other words, eligible individuals are randomly allocated to ‘exposure’ categories— Placebo and Treatment Groups—and the effects of exposure and non-exposure (or different degrees of exposure) on outcome during a defined period compared statistically.

The following points need attention in planning an RCT:

(1) Reason for the Study:

(a) Rationale for the study (including consideration of ethical aspects of the study).

(b) Objective(s)—Classified into ‘Primary’ and ‘Secondary’.

(2) Individuals to be Admitted to the Study:

(a) What is the universe to which the results are to be generalized?

(b) Diagnostic criteria—Reliability, validity and standardisation.

(3) Criteria for excluding individuals from admission to the RCT.

(4) Design of the Study:

(a) Procedure of random allocation; pre-stratification, if feasible.

(b) Need for placebo groups.

(c) Need for, and feasibility of, ‘Double blind’ and ‘Blind’ procedures in pre- and post-treatment assessments.

(d) Treatment regimens—dose, rhythm, duration.

(e) Exclusion of individuals from trial after admission into the RCT, because of side-effects, deterioration in clinical condition, etc. Objective definitions to be used for toxicity.

(f) Any freedom to treating physician to alter mode of therapy—specifically what is not permitted.


Mortality and Sickle Cell Disease in Africa

Reports of high rates of childhood mortality, 50%-90%, among African children with SCD are specific to SS. Qualitatively, experts have stated that the “majority,” “most,” the “vast majority,” or “nearly all” Africans born with SS die during childhood. From the 1950s onward, multiple researchers have reported an almost total absence of SS among samples of African adults, whereas other investigators have reported finding a frequency of SS among African adults of reproductive age of up to 20% of the expected number. The impact of SC on child survival is less clear. Evidence of raised mortality has been reported in some studies, but not in others.

The purposes of this review are: (1) to describe the methods that can be used to estimate SCD mortality; (2) to assess the available data on child survival among SS individuals in Africa; and (3) to discuss prospects for improving the current estimates through the collection and analysis of additional data. We reviewed and interpreted recent data that could provide insights into the current rates of survival among children with SCD in Africa, much of which has not been cited in previous expert discussions.


PUBLIC HEALTH AND ETHICAL CONSIDERATIONS IN PLANNING FOR QUARANTINE

Martin Cetron, M.D.

Centers for Disease Control and Prevention

Julius Landwirth, M.D., J.D.

Quarantine is one of the oldest, most effective, most feared, and most misunderstood methods of controlling communicable disease outbreaks. Its etymological roots are traceable to fourteenth century public health practices requiring ships arriving in Venice from plague-infected ports to sit at anchor for 40 days (hence, quarantine) before disembarking their surviving passengers. While in recent times the use of quarantine has been more humane and scientifically based, the historical association with exile and death and the morally negative connotation of sacrifice of a few for the benefit of others remains as an undercurrent of public apprehension. Nevertheless, quarantine was recently implemented successfully in several countries as a socially acceptable measure during the SARS epidemic in 2003 (Cetron and Simone, 2004). It is an important component of the Department of Health and Human Services (HHS) Pandemic Influenza Plan issued in November, 2005 (HHS, 2006a).

The purpose of this article is to review the modern public health approach to quarantine, outline highlights of current plans for its implementation in the event of an avian influenza pandemic, and consider the ethical principles that should be considered.

Definitions

Quarantine is the restriction of persons who are presumed to have been exposed to a contagious disease but are not ill. It may be applied at the individual, group, or community level and usually involves restriction to the home or designated facility. Quarantine may be voluntary or mandatory.

Isolation is the separation of ill persons with contagious diseases. It may be applied at the individual, group, or community level.

Quarantine of groups refers to quarantine of people who have been exposed to the same source of illness (e.g., a public gathering, airline flight, school, or workplace).

Working quarantine refers to persons who are at occupational risk of influenza infection, such as health-care workers, who may be restricted to their homes or designated facilities during off-duty hours.

Community-wide quarantine refers to closing of community borders or the erection of a real or virtual barrier around a geographic area (cordon sanitaire).

Modern public health places quarantine within a broader spectrum of interventions generally referred to as “social distancing.”

The effect of successful measures to increase social distance is to convert a dynamic of exponentiation in the spread of an infectious agent to one of suppression, in which the number of secondary cases from exposed persons is reduced to a manageable level. Time is the key variable in the success or failure of social distancing strategies: what matters includes the duration of communicability, whether or not communicability occurs before the onset of symptoms, the number of resulting contacts, and the efficiency of, or delays in, contact tracing.
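The conversion from exponentiation to suppression can be illustrated with a toy calculation; in the sketch below, each case is assumed to infect R others per generation, and all the numbers are purely illustrative.

```python
# A toy illustration of the point above: if each case infects R others per
# generation on average, case counts grow for R > 1 and die out for R < 1.
# The numbers are illustrative, not epidemiological estimates.
def generations(r, seed_cases=10, steps=6):
    counts = [seed_cases]
    for _ in range(steps):
        counts.append(round(counts[-1] * r))
    return counts

print(generations(2.0))  # exponentiation: [10, 20, 40, 80, 160, 320, 640]
print(generations(0.7))  # suppression:    [10, 7, 5, 4, 3, 2, 1]
```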

Globalization of travel and trade and decreased travel time between distant places have further complicated these relationships. There are several hundred international ports of entry in the United States. Fortunately, 25 airports account for approximately 85 percent of international arrivals. Detailed recommendations for travel-related containment measures can be found in the full HHS report and will not be further elaborated here.

Principles of Modern Quarantine

In the months before adequate supplies of vaccines and antiviral agents are expected to be available, quarantine and isolation are likely to be the mainstays of containment strategies.

The HHS plan states that the goal of quarantine is to protect the public by separating those exposed to dangerous communicable disease from the general population. It represents collective action for the common good that is predicated on aiding individuals who are already infected or exposed and protecting others from inadvertent exposure (HHS, 2006b).

Principles of modern quarantine and social distancing limit their use to situations involving highly dangerous and contagious diseases in which resources are reliably available to implement and maintain the measures. Modern quarantine encompasses a wide range of strategies to reduce transmission that may be implemented along a continuum based on the phase and intensity of an outbreak.

For example, at a stage when transmission of a novel influenza virus is still limited, either abroad or in the area, and local cases are either imported or have clear epidemiological links to other cases, individual quarantine of close contacts may be effective. At a more advanced phase of the pandemic, however, when virus transmission in the area is sustained and epidemiological links to other known cases are unclear, limiting quarantine to exposed individuals may be ineffective, and the strategy may need to expand to include community-based interventions that increase social distance. These include school closings, cancellation of public gatherings, encouraging non-essential workers to stay home, and reduced holiday transportation schedules. If these measures are believed to be ineffective, community-wide quarantine may need to be implemented.

The HHS guidelines cite two important principles designed to help ensure that those in quarantine are not placed at increased risk. First, quarantined individuals will be closely monitored, with daily visits as needed, in order to detect the earliest onset of symptoms and to separate those who become ill from those who are well. Second, persons in isolation will be among the first to receive any disease-prevention interventions. In addition, the HHS plan recommends that they should be provided with all needed support services, including psychological support, food and water, and household and medical supplies.

Home quarantine is the preferred method of separation, whenever possible. Designated quarantine facilities may have to be identified for potentially affected persons who do not have access to an appropriate home environment, such as persons living in dormitories, travelers, the homeless, or if the configuration of the home is not suitable for the protection of the potentially infected person and other occupants.

Voluntary quarantine is the preferred first option before resorting to mandatory orders or surveillance devices. In this connection, it is noteworthy that quarantine does not require 100 percent compliance to be effective. Toronto Public Health officials reported only 22 orders for mandatory detainment among the approximately 30,000 persons who were quarantined (Upshur, 2003).

Legal and Ethical Considerations

Primary responsibility for public health matters within their borders rests with state and local governments. This includes isolation and quarantine. Applicable state laws, regulations and procedures vary widely. A recently developed Model State Emergency Health Powers Act attempts to promote greater interstate consistency in response to emergency public health situations (Center for Law and the Public’s Health, 2001). In the section on isolation and quarantine, the Model Act covers: the principles and conditions governing implementation of quarantine; authorization of public health authorities to impose temporary quarantine by directive, with rights of appeal within 10 days; imposition of quarantine with notice following a public health authority court petition and hearing; and legal procedures for release from quarantine or relief from violations of conditions of quarantine. Although it has been criticized by some as being overly broad in its coercive powers (Annas, 2002; Mariner, 2001), the Model Act has been adopted in whole or part in a number of jurisdictions.

The federal Public Health Service Act (U.S. Congress, 1946) gives the HHS secretary responsibility for preventing introduction, transmission, and spread of communicable diseases from foreign countries into the United States and within the United States and its territories/possessions. This authority is delegated to the Centers for Disease Control and Prevention (CDC), which are empowered to detain, medically examine, or conditionally release individuals reasonably believed to be carrying a communicable disease. The Public Health Service Act also provides that the list of diseases for which quarantine is authorized must first be specified in an executive order of the president, on recommendation of the HHS secretary (CDC, 2006). On April 5, 2005, influenza caused by a novel or reemergent strain that is causing or has the potential to cause a pandemic was added to that list (White House, 2005).

Although the discipline of public health has its origins several centuries ago, it is only relatively recently that ethical principles and codes to guide public health practice and policy have been formulated. The ethical principles at the heart of the more fully developed fields of medical and research ethics are grounded in the primacy of individual autonomy in clinical decision-making in the therapeutic setting and in consent for participation in the setting of human subjects research. They are guided by a fundamental moral axiom that individual persons are valued as ends in themselves and should never be used merely as means to another’s ends. Public health, on the other hand, emphasizes collective action for the good of the community.

The Principles of the Ethical Practice of Public Health, issued by the Public Health Leadership Society in 2002 (Public Health Leadership Society, 2002), states that community health should be achieved in a way that respects the rights of individuals and the community. Accompanying notes are instructive:

This principle identifies the common need in public health to weigh the concerns of both the individual and the community. There is no ethical principle that can provide a solution to this perennial tension in public health. We can highlight, however, that the interest of the community is part of the equation, and for public health it is the starting place in the equation; it is the primary interest of public health. Still, there remains the need to pay attention to the rights of individuals when exercising the police powers of public health (Public Health Leadership Society, 2002).

To address this potential dichotomy, the principles require ensuring opportunity for informed community participation in the development of policies, programs, and priorities, accessibility to basic resources and conditions necessary for health, and protection of confidentiality.

Principles of practice, law and ethics in the containment of outbreaks of infectious disease, especially use of quarantine, confront a common underlying concern, namely,

The individual fear and community panic associated with infectious diseases often leads to rapid, emotionally driven decision making about public health policies needed to protect the community that may be in conflict with current bioethical principles regarding care of individual patients (Smith et al., 2004).

In November 2005, the Council on Ethical and Judicial Affairs of the American Medical Association issued recommendations for the medical profession in the use of quarantine and isolation as public health interventions. Again, the tensions between the ethical imperatives of therapeutic medicine and public health are reflected in the following excerpts:

Quarantine and isolation to protect the population’s health potentially conflict with the individual rights of liberty and self-determination. The medical profession, in collaboration with public health colleagues, must take an active role in ensuring that those interventions are based on science and are applied according to certain ethical considerations. . . . Individual physicians should participate in the implementation of appropriate quarantine and isolation measures as part of their obligation to provide medical care during epidemics. . . . In doing so, advocacy for their individual patients’ interests remain paramount (Council on Ethical and Judicial Affairs, 2005).

An important rationale for acknowledging and attempting to ameliorate this tension in pandemic preparedness planning, including quarantine measures, is to reduce the potential for unfair distribution of burdens and benefits among various segments of society (Markovits, 2005). In an important contribution, Nancy Kass has developed a six-step framework for ethical analysis specifically for public health (Kass, 2001). The application of this general framework to quarantine is discussed in detail elsewhere in these proceedings.

Ross Upshur has outlined four principles that must be met to justify quarantine (Upshur, 2002):

First, under the harm principle there must be clear scientific evidence of person-to-person spread of the disease and the necessity of quarantine as a containment measure. Second, the least restrictive means should be implemented. Third, upholding the principle of reciprocity points to the community’s obligation to provide necessary support services for those in quarantine. Fourth, the obligation of public health authorities is to communicate the reasons for their actions and to allow for a process of appeal. In November 2004, the World Health Organization issued a checklist for influenza pandemic preparedness. It encourages planners to “consider the ethical issues related to limiting personal freedom, such as may occur with isolation and quarantine” (WHO, 2005a).

An instructive example of how ethical considerations can be incorporated into pandemic preparedness plans can be found in the Ontario Health Plan for an Influenza Pandemic (Ontario Ministry of Health and Long-Term Care, 2005). The development of this plan included a collaboration with the Toronto Joint Centre for Bioethics, which produced a 15-point ethical guide for decision making for a pandemic (University of Toronto Joint Centre for Bioethics Pandemic Influenza Working Group, 2005). The guide identified four key ethical issues in pandemic preparedness planning, one of which was “restricting liberty in the interest of public health by measures such as quarantine.” The guide describes the following substantive and procedural ethical values at stake in addressing this issue:

Procedures should be: reasonable, with reasons for decisions shared with stakeholders; open and transparent; inclusive, with stakeholder participation; responsive, subject to review and revision with experience; and accountable.

Based on these principles, the guide recommended the following:

Past experience has shown that voluntary cooperation and public trust are key ingredients of successful response to a public health emergency. They may be important antidotes to individual fear and community panic that may be engendered by infectious disease outbreaks. Careful attention to the ethical values at stake in public health decision making can help foster voluntary cooperation and public trust and should be a part of state and federal pandemic preparedness planning.


Hygiene promotion and social mobilisation

Health education campaigns, adapted to local culture and beliefs, should promote the adoption of appropriate hygiene practices such as hand-washing with soap, safe preparation and storage of food and safe disposal of the faeces of children. Funeral practices for individuals who die from cholera must be adapted to prevent infection among attendees.

Further, awareness campaigns should be organised during outbreaks, and information should be provided to the community about the potential risks and symptoms of cholera, precautions to take to avoid cholera, when and where to report cases, and the need to seek immediate treatment when symptoms appear. The location of appropriate treatment sites should also be shared.

Community engagement is key to long term changes in behaviour and to the control of cholera.


Selecting & Defining Cases and Controls

Careful thought should be given to the case definition to be used. If the definition is too broad or vague, it is easier to capture people with the outcome of interest, but a loose case definition will also capture people who do not have the disease. On the other hand, if an overly restrictive case definition is employed, fewer cases will be captured, and the sample size may be limited. Investigators frequently wrestle with this problem during outbreak investigations. Initially, they will often use a somewhat broad definition in order to identify potential cases. However, as an outbreak investigation progresses, there is a tendency to narrow the case definition to make it more precise and specific, for example by requiring confirmation of the diagnosis by laboratory testing. In general, investigators conducting case-control studies should thoughtfully construct a definition that is as clear and specific as possible without being overly restrictive.

Investigators studying chronic diseases generally prefer newly diagnosed cases, because such cases tend to be more motivated to participate and may remember relevant exposures more accurately, and because using them avoids complicating factors related to the selection of longer duration (i.e., prevalent) cases. However, it is sometimes impossible to obtain an adequate sample size if only recent cases are enrolled.


Glossary

Acute Kidney Injury (AKI): Sudden and temporary loss of kidney function.

Chronic Kidney Disease (CKD): Any condition that causes reduced kidney function over a period of time. Chronic kidney disease may develop over many years and lead to end-stage kidney (or renal) disease (ESRD).

The five stages of CKD are listed below (a short classification sketch follows the list):

  • Stage 1: Kidney damage with normal kidney function (estimated GFR ≥90 mL/min per 1.73 m²) and persistent (≥3 months) proteinuria.
  • Stage 2: Kidney damage with mild loss of kidney function (estimated GFR 60-89 mL/min per 1.73 m²) and persistent (≥3 months) proteinuria.
  • Stage 3: Mild-to-severe loss of kidney function (estimated GFR 30-59 mL/min per 1.73 m²).
  • Stage 4: Severe loss of kidney function (estimated GFR 15-29 mL/min per 1.73 m²).
  • Stage 5: Kidney failure requiring dialysis or transplant for survival. Also known as ESRD (estimated GFR <15 mL/min per 1.73 m²).
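As a rough illustration of how these cut-points combine, here is a minimal classification sketch; the function name and the proteinuria flag are this sketch's own reading of the list above, not a clinical tool.

```python
# A minimal classification sketch following the stage definitions above.
# eGFR is in mL/min per 1.73 m^2; stages 1-2 require evidence of kidney
# damage (here, persistent proteinuria), per the glossary.
def ckd_stage(egfr, persistent_proteinuria):
    if egfr >= 60:
        if not persistent_proteinuria:
            return "No CKD by these criteria"
        return "Stage 1" if egfr >= 90 else "Stage 2"
    if egfr >= 30:
        return "Stage 3"
    if egfr >= 15:
        return "Stage 4"
    return "Stage 5 (ESRD)"

print(ckd_stage(75, True))    # Stage 2
print(ckd_stage(95, False))   # No CKD by these criteria
print(ckd_stage(25, False))   # Stage 4
```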

Dialysis: Treatment to filter wastes and water from the blood. When their kidneys fail, people need dialysis to filter their blood artificially. The two main forms of dialysis are hemodialysis and peritoneal dialysis.

End-Stage Renal Disease (ESRD): Total and permanent kidney failure treated with a kidney transplant or dialysis.

Glomerular Filtration Rate (GFR): The rate at which the kidneys filter wastes and extra fluid from the blood, measured in milliliters per minute.

Proteinuria: Condition in which the urine has more-than-normal amounts of a protein called albumin.

Fast Facts

  • The overall prevalence of CKD in the general population is approximately 14 percent.
  • High blood pressure and diabetes are the main causes of CKD. Almost half of individuals with CKD also have diabetes and/or self-reported cardiovascular disease (CVD).
  • More than 661,000 Americans have kidney failure. Of these, 468,000 individuals are on dialysis, and roughly 193,000 live with a functioning kidney transplant.
  • Kidney disease often has no symptoms in its early stages and can go undetected until it is very advanced. (For this reason, kidney disease is often referred to as a “silent disease.”)
  • The adjusted incidence rate of ESRD in the United States rose sharply in the 1980s and 1990s, leveled off in the early 2000s, and has declined slightly since its peak in 2006.
  • Compared to Caucasians, ESRD prevalence is about 3.7 times greater in African Americans, 1.4 times greater in Native Americans, and 1.5 times greater in Asian Americans.
  • Each year, kidney disease kills more people than breast or prostate cancer. In 2013, more than 47,000 Americans died from kidney disease. 1

Prevalence

  • The overall prevalence of CKD increased from 12 percent to 14 percent between 1988 and 1994 and from 1999 to 2004 but has remained relatively stable since 2004. The largest increase occurred in people with Stage 3 CKD, from 4.5 percent to 6.0 percent, since 1988.
  • Women (15.93 percent) are more likely to have stages 1 to 4 CKD than men (13.52 percent). 2

Age-Adjusted Prevalence of CKD Stages 1-4 by Gender 1999-2012 2

  • African Americans (17.01 percent) and Mexican Americans (15.29 percent) are more likely to have CKD than Caucasians (13.99 percent). 3

Age-Adjusted Prevalence of CKD Stages 1-4 by Race/Ethnicity 1999-2012 3

  • CKD often occurs in the context of multiple comorbidities and has been termed a “disease multiplier.” Almost half of individuals with CKD also have diabetes and self-reported CVD.

Distribution of NHANES participants with diabetes, self-reported cardiovascular disease, and single-sample markers of CKD, 2007-2012

Abbreviations: CKD, chronic kidney disease; CVD, cardiovascular disease; DM, diabetes mellitus

CKD Awareness

  • CKD often has no symptoms in its early stages and can go undetected until it is very advanced. (For this reason, kidney disease is often referred to as a “silent disease”).

NHANES participants with CKD aware of their kidney disease, 2001-2012

  • Patient awareness is less than 10 percent for those with stages 1 to 3 CKD.
  • Awareness is higher among people with Stage 4 CKD, who often experience obvious symptoms.

CVD and CKD

  • People with CKD are at high risk for CVD, and the presence of CKD often complicates CVD treatment and prognosis.
  • The prevalence of CVD is 69.6 percent among persons ages 66 and older who have CKD, compared to 34.7 percent among those who do not have CKD.
  • Atherosclerotic heart disease is the most frequent CVD linked to CKD; its prevalence is more than 40 percent among people ages 66 and older.
  • The percentage of people who undergo cardiovascular procedures is higher among those with CKD than among those without CKD.

Acute Kidney Injury (AKI)

  • In 2013, the unadjusted rate of AKI hospitalizations in the Medicare population fell by 4.9 percent. This decrease was observed across all age and race groups.
  • For Medicare patients ages 66 and older with an AKI hospitalization in 2011, the cumulative probability of a recurrent AKI hospitalization within 2 years was 48 percent.
  • Among Medicare patients ages 66 and older with a first AKI hospitalization, the in-hospital mortality rate in 2013 was 9.5 percent (or 14.4 percent when including discharge to hospice), and less than half of all patients were discharged to their home.

Hospital discharge status of first AKI hospitalization for Medicare patients aged 66+, 2013

End-Stage Renal Disease (ESRD)

  • After rising steadily from 1980 to 2001, the incidence rate of ESRD leveled off in the early 2000s and has declined slightly since 2006.
  • The number of incident (newly reported) ESRD cases in 2013 was 117,162; the unadjusted incidence rate was 363 per million per year.
  • Although the number of ESRD incident cases plateaued in 2010, the number of ESRD prevalent cases continues to rise by about 21,000 cases per year.
  • African-Americans are about 3.5 times more likely to develop ESRD than Caucasians.
  • Hispanics are about 1.5 times more likely to develop ESRD than non-Hispanics.
  • In 2013, 88.2 percent of all incident cases began renal replacement therapy with hemodialysis, 9.0 percent started with peritoneal dialysis, and 2.6 percent received a preemptive kidney transplant.
  • As of December 31, 2013, 63.7 percent of all prevalent ESRD cases were receiving hemodialysis therapy, 6.8 percent were being treated with peritoneal dialysis, and 29.2 percent had a functioning kidney transplant.
  • The leading causes of ESRD in children during 2009-2013 were: cystic/hereditary/congenital disorders (33.0 percent), glomerular disease (24.6 percent), and secondary causes of glomerulonephritis (12.9 percent).
  • A total of 1,462 children in the United States began ESRD care in 2013, and 9,921 children were treated for ESRD on December 31, 2013. The most common initial ESRD treatment modality among children overall was hemodialysis (56 percent).
  • The number of children listed for incident and repeat kidney transplant was 1,277 in 2013.

Kidney Transplants

  • 17,600 kidney transplants were performed in the United States in 2013.
  • Less than one-third of the transplanted kidneys were from living donors in 2013.
  • From 2012 to 2013, there was a 3.1 percent increase in the cumulative number of recipients with a functioning kidney transplant.
  • Among candidates newly wait-listed for either a first-time or repeat kidney-alone transplant in 2009, the median waiting time to transplant was 3.6 years.
  • The number of deceased donors increased significantly since 2003, reaching 8,021 in 2013.
  • The rate of deceased donors among African Americans more than doubled from 1999 to 2013.
  • In 2012, the probability of 1-year graft survival was 92 percent and 97 percent for deceased and living donor kidney transplant recipients, respectively.
  • The probability of patient survival within 1 year post-transplant was 95 percent and 98 percent in deceased and living donor kidney transplant recipients, respectively, in 2012.
  • Since 1996, the probabilities of graft survival and patient survival have steadily improved among recipients of both living and deceased donor kidney transplants.

Morbidity

  • A notable decrease in hospitalization rates occurred from 2012 to 2013. Rates decreased by 11 percent for CKD patients and by 10.1 percent for those without CKD. However, rates of both overall and cause-specific admissions increased with advancing stages of CKD.
  • Hospitalizations increased among CKD patients with the presence of diabetes and CVD.
  • Among hemodialysis patients, the overall rate of hospitalizations for ESRD in 2013 was 1.7 admissions per patient year—down from 2.1 in 2005.
  • Rates of readmission for CKD patients were higher than those for patients without CKD. In 2013, 22.3 percent of patients with CKD were readmitted to the hospital within 30 days, compared to only 15.8 percent of those without CKD.

Adjusted percentage of patients readmitted to the hospital within 30 days of discharge, among Medicare CKD patients aged 66+, discharged alive from an all-cause index hospitalization between January 1 and December 1, by year, 2001-2013

Mortality

  • In 2013, adjusted mortality rates remained higher for Medicare patients with CKD (117.9/1,000) than for those without CKD (47.5/1,000), and these rates increased with CKD severity, although this gap narrowed during the period 2001 to 2013.
  • Male patients had slightly higher mortality rates (52.6/1,000) than female patients (43.4/1,000); this difference was more pronounced among those with CKD (males: 128.7/1,000; females: 110.0/1,000).
  • Mortality rates continue to decrease for dialysis and transplant patients, having fallen by 28 percent and 40 percent, respectively, since 1996.
  • CVD contributes to more than half of all deaths among patients with ESRD. Arrhythmias and cardiac arrest alone were responsible for more than one-third (37 percent) of CVD deaths.

Causes of death in ESRD patients, 2012-2014

  • Overall, the rates of death among Medicare patients with CKD are declining; however, they remain higher in patients with CKD than in those without CKD.
  • The presence of diabetes and CVD along with CKD increases the risk of death.

All-cause mortality rates (per 1,000 patient years at risk) for Medicare patients aged 66+, by CKD status and year, 2001-2013 (adjusted)

Medicare Spending for CKD

  • Medicare spending for patients with CKD ages 65 and older exceeded $50 billion in 2013 and represented 20 percent of all Medicare spending in this age group.
  • More than 70 percent of Medicare spending for CKD patients ages 65 and older was incurred by those who also had diabetes, congestive heart failure, or both.
  • Although spending was 12.7 percent higher for African Americans than Caucasians in 2013, this represented a reduction from the 19.6 percent gap that occurred in 2010.
  • Spending was more than twice as high for patients with all three chronic conditions of CKD, diabetes, and congestive heart failure ($38,230) as for patients with only CKD ($15,614).
  • Medicare fee-for-service spending for ESRD beneficiaries rose by 1.6 percent, from $30.4 billion in 2012 to $30.9 billion in 2013, accounting for 7.1 percent of the overall Medicare paid claims costs.

Additional Information

  • Statistics on Kidney Disease in the United States: U.S. Renal Data Service from the National Institute of Diabetes and Digestive and Kidney Diseases

Alcohol Research & Health. 2003;27(3):209-219.

Robert E. Mann, Ph.D., Reginald G. Smart, Ph.D., and Richard Govoni, Ph.D.

Robert E. Mann, Ph.D., is a senior scientist in the Department of Social, Prevention and Health Policy Research at the Centre for Addiction and Mental Health and an associate professor in the Department of Public Health Sciences at the University of Toronto, both in Toronto, Canada.

Reginald G. Smart, Ph.D., is a principal and senior scientist in the Department of Social, Prevention and Health Policy Research at the Centre for Addiction and Mental Health in Toronto, Canada.

Richard Govoni, Ph.D., is a research fellow in the Department of Public Health Sciences at the University of Toronto and an assistant professor in the Department of Psychology at the University of Windsor in Windsor, Canada.

The preparation of this work was supported in part by a fellowship to R. Govoni from the Ontario Problem Gambling Research Centre.

This article describes the various forms of alcoholic liver disease (ALD), with particular emphasis on cirrhosis, the form of liver disease that often is most associated with alcohol abuse and about which the most information is available. Epidemiological research has evaluated the prevalence of ALD and the factors that often contribute to the disease. Although the most potent factor in ALD is the excessive consumption of alcoholic beverages, gender and ethnic differences also account for some important variations in rates of liver disease. Mortality rates from cirrhosis have declined in the United States and some other countries since the 1970s. A number of factors may have contributed to this decline, including increased participation in treatment for alcohol problems and Alcoholics Anonymous membership, decreases in alcohol consumption, and changes in the consumption of certain types of alcoholic beverages. Key words: alcoholic liver cirrhosis; epidemiological indicators; gender differences; ethnic differences; AODR (alcohol and other drug related) mortality; morbidity; AOD (alcohol and other drug) use pattern; risk factors; trend; aggregate AOD consumption; beneficial vs adverse drug effect; Alcoholics Anonymous; United States; survey of research

One of the most enduring insights into the effects of alcohol has been the assertion that heavy alcohol consumption increases mortality rates, especially those from cirrhosis of the liver and other forms of liver disease (see the sidebar, below). The scientific study of alcohol-related mortality began in the 1920s with Pearl’s studies (1926) of death rates among various types of drinkers. He and others found that heavy drinkers had higher rates of overall mortality and of mortality from cirrhosis than did lighter drinkers or abstainers. Since then, mortality studies have continued to demonstrate that heavy drinkers and alcoholics die from cirrhosis at a much higher rate than the general population (Mann et al. 1993; Pell and D’Alonzo 1973; Schmidt and de Lint 1972; Thun et al. 1997). In addition, laboratory studies conducted in the 1930s established that feeding large amounts of alcohol to rats and other animals caused liver disease (Lelbach 1974).

Types of Alcoholic Liver Disease

Drinking Patterns and Alcoholic Liver Disease

Many studies show that the amount of alcohol consumed and the duration of that consumption are closely associated with cirrhosis.1 (1 In examining trends in alcoholic liver disease, some authors have considered only those cases directly attributable to alcohol [e.g., Douds et al. 2003]. Other authors have determined that many cirrhosis deaths coded as not involving alcohol are in fact alcohol related [particularly for some age groups, including the middle-aged]; thus, these authors have examined total cirrhosis deaths when evaluating trends [e.g., Ramstedt 2001].) One of the best demonstrations of this association was presented by Lelbach (1974), who studied 319 patients in an alcoholism clinic in Germany. He calculated the average amount of alcohol consumed per hour in a 24-hour day. As shown in table 1, patients with normal liver function consumed far less alcohol than did those with cirrhosis. Those who did not have cirrhosis but did have other liver malfunctions had intermediate rates of alcohol intake. In addition, patients with normal liver function had been drinking heavily for only about 8 years on average, whereas those with cirrhosis had been drinking heavily for more than 17 years on average. As this research illustrates, the risk of developing cirrhosis is a function of both quantity and duration of alcohol consumption. Bellentani and Tiribelli (2001) recently proposed that cirrhosis does not develop below a lifetime alcohol ingestion of 100 kg of undiluted alcohol. This amount corresponds to an average daily intake of 30 grams of alcohol (between two and three drinks2 [2 The National Institute on Alcohol Abuse and Alcoholism (NIAAA) defines a standard drink as 11-14 grams (g) of alcohol, which corresponds to approximately one shot of 80-proof spirits (about 14 g alcohol), one glass of wine (11 g), or one 12-oz beer (12.8 g).]) for 10 years. These investigators also noted that consuming alcohol with food resulted in somewhat lower levels of risk than consuming alcohol by itself.
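A quick arithmetic check of the Bellentani and Tiribelli threshold, done below as a one-off calculation: 30 g per day sustained for 10 years comes to roughly 110 kg, in line with the quoted 100 kg lifetime figure.

```python
# A one-off check of the arithmetic behind the quoted threshold: 30 grams of
# alcohol per day sustained for 10 years, versus the 100 kg lifetime figure.
grams_per_day = 30
years = 10
lifetime_kg = grams_per_day * 365 * years / 1000
print(lifetime_kg)  # 109.5 kg, roughly the 100 kg lifetime threshold
```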

Table 1 Liver Function and Alcohol Intake

Liver Function                              No. of Cases   Mean Alcohol Intake*     Average Duration of
                                                           (mg alcohol/kg body      Alcohol Abuse (years)
                                                           weight per hour)
Normal liver function                            70                90                       7.7
Uncomplicated fatty liver                       118               109                       7.8
Severe steatofibrosis with
  inflammatory reactions                         48               127                      10.3
Chronic alcoholic hepatitis                      78               125                      11.9
Cirrhosis                                        39               147                      17.1

*Mean daily intake, expressed per hour of a 24-hour day.

NOTES: Patients with normal liver function consumed far less alcohol and had been drinking for fewer years than those with cirrhosis. Those who did not have cirrhosis but did have other liver malfunctions had intermediate rates of alcohol intake. See the sidebar for definitions of alcoholic liver disease.

SOURCE: Adapted from Lelbach 1974.
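Table 1’s intake unit (milligrams of alcohol per kilogram of body weight, averaged per hour of the day) is unusual, so a rough conversion to grams per day may help. The sketch below assumes a 70-kg reference body weight, which is an illustrative assumption and not a figure from Lelbach (1974).

```python
# Illustrative conversion of Table 1's unit (mg alcohol per kg body weight
# per hour, averaged over 24 hours) into grams of alcohol per day.
# The 70-kg body weight is an assumption for illustration only.
BODY_WEIGHT_KG = 70
HOURS_PER_DAY = 24

for group, mg_per_kg_per_hr in [("Normal liver function", 90), ("Cirrhosis", 147)]:
    grams_per_day = mg_per_kg_per_hr * BODY_WEIGHT_KG * HOURS_PER_DAY / 1000
    print(f"{group}: ~{grams_per_day:.0f} g/day")
# Normal liver function: ~151 g/day; Cirrhosis: ~247 g/day
```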

More recent studies confirm the close association between alcohol consumption and cirrhosis risk. Anderson (1995) examined data from four case-control studies in men and women. (Figure 1 shows results from representative studies [Coates et al. 1986; Tuyns and Péquignot 1984].) This investigation showed that the risk of cirrhosis was related to the amount of alcohol consumed in every study. In addition, as alcohol consumption increased, the risk of cirrhosis increased more rapidly for females than for males. The link between gender and risk for cirrhosis is addressed in detail in the Gender Differences section below.

Figure 1 Alcohol consumption and incidence of cirrhosis of the liver in men (m) and women (w). Studies have shown a close relationship between alcohol consumption and cirrhosis risk.

Note: Data truncated at 70 g/day.

Cirrhosis Morbidity and Mortality and Average Alcohol Consumption

The strong link between heavy or excessive alcohol use and the development of liver disease took on added significance in the middle of the 20th century, when several researchers began exploring cirrhosis as a potential marker for levels of alcohol problems in populations (Jellinek and Keller 1952; Ledermann 1956; Seeley 1960; Terris 1967). Of particular importance was the discovery of a relationship between cirrhosis mortality rates and per capita levels of alcohol consumption in the population. This relationship has proved to be remarkably strong and has been consistently observed across time periods and in various regions of the world (Bruun et al. 1975; Ramstedt 2001; Smart and Mann 1991). European researchers have observed a lagged relationship between cirrhosis mortality and consumption measures, with the rate of cirrhosis mortality in a given year being influenced by the alcohol consumption rates of several previous years (Corrao 1998; Ramstedt 2001). To account for this effect, Skog (1980) developed a “distributed lag model,” in which the effects of alcohol consumption in a given year are distributed over the next several years. Using this model, he was able to explain an apparent inverse relationship between consumption and cirrhosis mortality rates in Great Britain between 1931 and 1958 (Popham 1970): reanalyzing the data with the distributed lag model produced the expected positive relationship between consumption and cirrhosis mortality.
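To make the idea concrete, here is a minimal sketch of a distributed lag analysis in the spirit of Skog (1980): mortality in year t is regressed on consumption in years t through t−5. The synthetic data, the five-year lag depth, and the lag weights are hypothetical choices for illustration, not Skog’s actual specification.

```python
# A minimal distributed-lag sketch: cirrhosis mortality in year t is modeled
# as a weighted sum of per capita consumption in years t, t-1, ..., t-5.
# All data here are synthetic; units are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
years, L = 60, 5  # series length and lag depth (hypothetical)

consumption = 8 + rng.normal(0, 0.5, years)  # e.g., litres per capita per year
true_weights = np.array([0.30, 0.25, 0.20, 0.12, 0.08, 0.05])  # current year first

# Generate mortality as a lag-weighted sum of consumption, plus noise.
mortality = np.array(
    [true_weights @ consumption[t - L:t + 1][::-1] for t in range(L, years)]
) + rng.normal(0, 0.05, years - L)

# Recover the lag weights by ordinary least squares.
X = np.column_stack([consumption[t - L:t + 1][::-1] for t in range(L, years)]).T
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
print("Estimated lag weights:", np.round(beta, 2))  # close to true_weights
```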

Trends in Cirrhosis Mortality Rates

Liver cirrhosis is a major cause of death in the United States (Yoon et al. 2002; Minino et al. 2002). In 2000, it was the 12th leading cause of death, accounting for 1.1 percent of all deaths, with an age-adjusted death rate of 9.6 per 100,000 population. (Age adjustment is a statistical method of adjusting for age differences, between populations or over time, that might otherwise distort mortality trends. In the case of chronic diseases, including cirrhosis of the liver, unadjusted mortality rates may appear higher for older populations than for younger populations simply because mortality rates are higher, on average, in older people.) Cirrhosis mortality rates vary substantially among age groups: They are very low among the young but increase considerably in middle age, reaching a peak of 31.1 per 100,000 among people ages 75 to 84. Because of the increase in cirrhosis mortality rates in middle age, the contribution of cirrhosis to total deaths peaks in the 45–54 age group, for which it is the fourth leading cause of death. In cirrhosis mortality, the United States falls in the middle range internationally, as do countries such as Belgium and Canada (WHO 2000). Higher rates are seen in countries where people traditionally consume more alcohol than in the United States, such as Spain, France, and Italy. In countries where alcohol consumption is traditionally lower (e.g., Iceland, New Zealand, and Norway), cirrhosis death rates are lower.
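The age-adjustment procedure described in the parenthetical above can be summarized in a few lines: each age group’s death rate is weighted by that group’s share of a standard population. The sketch below uses made-up rates and weights purely to show the mechanics.

```python
# Direct age standardization: weight each age group's death rate by the
# standard population's age distribution. All numbers are hypothetical.
age_groups = ["25-44", "45-64", "65-84"]
crude_rates = [2.0, 20.0, 31.0]        # deaths per 100,000 within each group
standard_weights = [0.45, 0.35, 0.20]  # share of the standard population

age_adjusted = sum(r * w for r, w in zip(crude_rates, standard_weights))
print(f"Age-adjusted rate: {age_adjusted:.1f} per 100,000")  # 14.1
```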

Cirrhosis mortality rates in the United States have changed substantially over time. Early in the 20th century, these rates were at their highest point. As shown in figure 2, overall cirrhosis mortality rates declined precipitously with the introduction of Prohibition. When Prohibition ended, alcohol consumption and cirrhosis mortality rates increased until the late 1960s and early 1970s, when these rates began to approach levels seen in the first decade of the century. However, in the mid-1970s cirrhosis mortality rates began to decline as they had with the introduction of Prohibition; cirrhosis was the 8th leading cause of death in 1977 (Galambos 1985) but the 12th leading cause of death by 2000. Similar declines in cirrhosis mortality rates have been observed in many developed countries (including Canada, Sweden, France, and Italy), but in other developed countries (e.g., Great Britain, Finland, Denmark) cirrhosis death rates have increased (Ramstedt 2001). The reasons for the dramatic reductions remain a source of considerable interest, as will be discussed below.

Cirrhosis mortality rates may continue to decline if alcohol consumption rates remain low or fall further. However, the increase in cases of hepatitis C infection in the United States, which are predicted to peak by 2015 (Armstrong et al. 2000), may affect the rate of cirrhosis deaths. Because people infected with hepatitis C are more likely to develop cirrhosis when they drink, death rates from cirrhosis may increase in the future, even if drinking levels decline. (For more information on hepatitis C infection and alcohol, see the article by Schiff and Ozden in this issue.)

Reasons for Decreases in Cirrhosis Death Rates

One factor studied is participation in alcoholism treatment and Alcoholics Anonymous (AA). Estimates from this line of research include the following (a back-of-envelope sketch combining these figures follows the list):

  • Alcoholics seeking treatment drink an average of 160 g of undiluted alcohol per day.

  • About 14 percent of alcoholics will develop cirrhosis if they drink this quantity for a period of 8 years.

  • About 50 percent of alcoholics receiving treatment or attending AA meetings improve sufficiently to postpone the development of cirrhosis or avoid death if they already have cirrhosis.
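Combining these three estimates gives a rough sense of scale. In the sketch below, the cohort size and the share of alcoholics reaching treatment or AA are hypothetical; only the 14-percent risk and the 50-percent improvement figures come from the list above.

```python
# Back-of-envelope estimate of cirrhosis cases potentially averted or
# postponed by treatment/AA. Cohort size and coverage are hypothetical.
ALCOHOLICS = 100_000        # hypothetical cohort of heavy-drinking alcoholics
COVERAGE = 0.30             # hypothetical share reaching treatment or AA
CIRRHOSIS_RISK = 0.14       # risk after ~8 years at ~160 g/day (from the list)
IMPROVEMENT = 0.50          # share of treated who postpone cirrhosis or death

expected_cases = ALCOHOLICS * CIRRHOSIS_RISK
averted = ALCOHOLICS * COVERAGE * CIRRHOSIS_RISK * IMPROVEMENT
print(f"Expected cases without intervention: {expected_cases:.0f}")  # 14000
print(f"Cases potentially averted/postponed: {averted:.0f}")         # 2100
```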

Other Factors Associated With Increased Rates of Cirrhosis Morbidity and Mortality

Gender Differences

As discussed above, the epidemiology of cirrhosis historically has been linked closely to types and patterns of alcohol consumption. Other factors also may be at work in the development of liver disease. For example, there are important and long-standing gender differences in cirrhosis risk and mortality rates. As shown in figure 2, cirrhosis mortality rates are about two times higher in men than in women. These rates reflect the fact that men typically drink more than women and that the proportion of heavy drinkers and alcoholics is much higher among men. However, as noted previously, it also appears that at any given level of alcohol consumption, women have a higher likelihood of developing cirrhosis than men (see figure 1) (Tuyns and Péquignot 1984). This phenomenon is poorly understood, but several possible explanations have been offered. One is that levels of alcohol dehydrogenase, an enzyme involved in breaking down alcohol, may be lower in the stomachs of females than in those of males, which would result in a higher blood alcohol content for a female than for a male consuming an equivalent amount of alcohol (Frezza et al. 1990). Because damage to the liver is a function of blood alcohol levels and exposure time, factors that lead to higher blood alcohol concentrations could at least partially explain females’ higher risk for alcohol-related cirrhosis. Another possible explanation is that estrogen may increase the susceptibility of the liver to alcohol-related damage (Ikejima et al. 1998; Colantoni et al. 2003). Behavioral factors, including drinking patterns and diet, also may contribute to females’ higher cirrhosis risk.

Figure 2 Age-adjusted death rates of liver cirrhosis by gender, 1910–1932 in death-registration States and from 1933 onward in the entire United States. U.S. cirrhosis mortality rates were high at the beginning of the 20th century, declined precipitously with the introduction of Prohibition, and increased again when Prohibition ended. Mortality rates continued to increase until the early to mid-1970s, when these rates began to approach the levels seen in the first decade of the century. In the mid-1970s cirrhosis mortality rates began to decline again, as they had with the introduction of Prohibition, and they have continued to decline.

INSET (shaded area): Per capita alcohol consumption for the years 1935 to 1999, illustrating the link between alcohol consumption and cirrhosis mortality.

SOURCES: Mortality rate data adapted from Yoon et al. 2001; consumption data from Nephew et al. 2002.

Genetic factors, including those that influence alcohol metabolism and risk for alcoholism, also may be involved in the increased risk for cirrhosis seen in women (Reed et al. 1996), but there still is considerable debate on this issue, and further research is needed on the nature and extent of such genetic contributions.

In a recent study, Corrao and colleagues (1998) found that 98.1 percent of cirrhosis cases in men but only 67.0 percent of cases in women could be attributed to alcohol consumption, hepatitis C, and hepatitis B. The risk factors for cirrhosis appear to be more complex for women than they are for men, and more research will be required to identify and understand them.
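Attribution percentages like Corrao and colleagues’ are typically computed with a population attributable fraction (PAF). The sketch below implements Levin’s standard formula; the prevalence and relative risk shown are hypothetical, not values from the Corrao study.

```python
# Population attributable fraction (Levin's formula):
# PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence
# and RR the relative risk. Inputs below are hypothetical.
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Example: 25% exposure prevalence, relative risk of 13 (hypothetical values).
print(f"PAF: {attributable_fraction(0.25, 13):.1%}")  # 75.0%
```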

Ethnic Differences

Important differences in cirrhosis rates and cirrhosis mortality also exist among ethnic groups. Although ethnic group differences have been declining in recent years, cirrhosis rates remain higher for Blacks than for Whites in the United States (see figure 3), and the highest cirrhosis mortality rates currently are observed among Hispanic groups (Stinson et al. 2001). Although these differences seem to suggest higher alcohol consumption levels among Hispanics and Blacks than among Whites, studies of alcohol consumption patterns in these groups tend not to support this interpretation. For example, in recent years, alcohol consumption among Blacks has been lower than or comparable to that of Whites (see table 2).

Figure 3 Age-adjusted rates of alcohol-related cirrhosis by gender and ethnic group (Black, White, and Hispanic), United States, 1970–1998.

SOURCE: Yoon et al. 2001. (Categories shown in this figure were those used in the source study.)

Table 2 Consumption Patterns for Blacks and Whites, 1984 and 1992

                                 1984                        1992
Consumption Level       Blacks (%)   Whites (%)     Blacks (%)   Whites (%)

Males
  Abstainer                 29           23             35           28
  Infrequent                13           13              6            9
  Less frequent             12           16             19           21
  Frequent                  30           27             25           29
  Frequent heavy            16           19             15           12

Females
  Abstainer                 46           31             51           36
  Infrequent                18           23             24           22
  Less frequent             19           19             12           24
  Frequent                  13           23              8           15
  Frequent heavy             4            4              5            3

NOTES: In recent years, alcohol consumption among Blacks has been comparable to or less than that of Whites.
Some columns do not total 100 percent because of rounding.

SOURCE: Adapted from Jones–Webb 1998.

Several reasons for ethnic group differences in cirrhosis rates have been proposed, including demographic factors related to gender, age, income, education, and employment; biological factors, such as a family history of drinking problems; and environmental factors, such as stress (for a review, see Jones-Webb 1998). Other suggested factors are differential access to alcoholism treatment services (Singh and Hoyert 2000), although as yet no data are available to support this explanation, and differing rates of hepatitis C infection, which appears to be more prevalent among Hispanics than among Black and White populations (Yen et al. 2003). Ethnic group differences in cirrhosis risk and mortality also may be linked to the possibility that, over time, general health status has improved more for some ethnic groups than for others. However, as summarized in table 3, two general health indicators (age-adjusted death rate and life expectancy at birth) showed comparable gains for Blacks and Whites between 1970 and 2000. Thus, it is not yet possible to attribute changes in cirrhosis rates to changes in general health indicators of various groups.

Table 3 General Health Indicators for U.S. Blacks and Whites, 1970 and 2000

                                     Black      White      Black      White
                                     Males      Males      Females    Females

Age-Adjusted Death Rate per 100,000 Population*
  1970                               1,873.9    1,513.7    1,228.7    1,193.3
  2000                               1,377.8    1,018.2      947.9      739.1
  Percent change                       -26.5      -32.7      -22.9      -38.1

Life Expectancy (years)
  1970                                  60.0       68.0       68.3       75.6
  2000                                  68.2       74.8       74.9       80.0
  Percent change                       +13.7      +10.0       +9.7       +5.8

*Standardized to 2000 age distribution.

NOTE: Between 1970 and 2000, Blacks and Whites showed comparable gains in age–adjusted death rate and life expectancy at birth.

SOURCE: Minino et al. 2002.
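The percent-change rows in table 3 follow directly from the 1970 and 2000 values; here is a quick check using only the figures in the table.

```python
# Verify Table 3's percent-change rows: (rate_2000 - rate_1970) / rate_1970.
death_rates = {  # age-adjusted deaths per 100,000, from Table 3
    "Black males": (1873.9, 1377.8),
    "White males": (1513.7, 1018.2),
    "Black females": (1228.7, 947.9),
    "White females": (1193.3, 739.1),
}

for group, (r1970, r2000) in death_rates.items():
    print(f"{group}: {(r2000 - r1970) / r1970 * 100:+.1f}%")
# Black males: -26.5%   White males: -32.7%
# Black females: -22.9% White females: -38.1%
```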

As this discussion indicates, cirrhosis rates in subpopulations, such as those based on gender or ethnicity, can show significant deviations from the rates of cirrhosis that would be expected from alcohol consumption levels alone. These differences, which are not yet well understood, have important implications for research and prevention initiatives. From a public health perspective, an understanding of subpopulation dynamics is critical to the development of programs for preventing alcoholic liver disease.

Conclusion

Alcoholic liver disease is a major source of alcohol–related morbidity and mortality. Heavy drinkers and alcoholics may progress from fatty liver to alcoholic hepatitis to cirrhosis, and it is estimated that 10 percent to 15 percent of alcoholics will develop cirrhosis. The likelihood of developing ALD is, to a large extent, a function of both the duration and amount of heavy drinking, and the per capita consumption of alcohol within populations has been shown to be a strong determinant of cirrhosis mortality rates. Recent studies also suggest that alcohol and hepatitis C may exert a multiplicative effect on risk for cirrhosis and other liver disease.

Although ALD remains a major cause of death, important declines in ALD death rates have been observed in recent years. Undoubtedly these declines were caused in part by changes in alcohol consumption rates, but because the mortality rate decline began when consumption was still increasing, other factors appear to be involved as well. To date, the evidence indicates that increases in participation in AA and other treatment for alcohol abuse have played an important role in reducing cirrhosis mortality rates. Other research has suggested that cirrhosis mortality rates may be more closely related to consumption of certain alcoholic beverages—specifically spirits—than to total alcohol consumption, and that beverage–specific effects can account for the fact that cirrhosis rates appeared to decrease although consumption rates were increasing in the 1970s. Important differences in ALD rates in men and women and among different ethnic groups have been found as well.

Further research into these differences is likely to lead to improved prevention and treatment of alcohol–related liver disease.
