
REVIEW OF LITERATURE

DIABETES MELLITUS (DM):- Diabetes mellitus (DM), often referred to simply as diabetes, is a group of metabolic disorders of carbohydrate metabolism in which glucose is underutilized, producing hyperglycemia. In diabetes, the body either does not respond properly to its own insulin, does not make enough insulin, or both. DM is a chronic metabolic disorder characterized by a high blood glucose concentration, hyperglycemia (fasting plasma glucose > 7.0 mmol/L, or plasma glucose > 11.1 mmol/L 2 hours after a meal), caused by insulin deficiency, often combined with insulin resistance. Hyperglycemia occurs because of uncontrolled hepatic glucose output and reduced uptake of glucose by skeletal muscle with reduced glycogen synthesis. When the renal threshold for glucose reabsorption is exceeded, glucose spills over into the urine (glycosuria) and causes an osmotic diuresis (polyuria), which in turn results in dehydration, thirst and increased drinking (polydipsia). Diabetic ketoacidosis is an acute emergency; it develops because of accelerated fat breakdown to acetyl-CoA, which, in the absence of aerobic carbohydrate metabolism, is converted to acetoacetate and beta-hydroxybutyrate (which cause acidosis) and acetone (a ketone). As the disease progresses, patients are at increased risk of developing specific complications, including retinopathy leading to blindness, renal failure, neuropathy and atherosclerosis.

PANCREAS:-

Structure of pancreas & Cells of islet of Langerhans


Origin:- The pancreas is derived from the endoderm of the embryo.

Location & structure:- The pancreas lies inferior to the stomach in a bend of the duodenum.

It is a soft, lobulated, greenish-pink gland which weighs about 60 grams. It is about 2.5 cm wide and 12 to 15 cm long. The pancreas has two parts: (i) Exocrine Part, (ii) Endocrine Part.

(i) Exocrine Part:- The exocrine part of the pancreas consists of rounded lobules (acini) that secrete an alkaline pancreatic juice with pH 8.4. About 500 to 800 ml of pancreatic juice is secreted per day. The pancreatic juice is carried by the main pancreatic duct into the duodenum through the hepatopancreatic ampulla; the accessory pancreatic duct pours the pancreatic juice directly into the duodenum. The pancreatic juice contains sodium bicarbonate, three pro-enzymes (trypsinogen, chymotrypsinogen and procarboxypeptidase) and some enzymes such as pancreatic amylase, DNAse, RNAse, and pancreatic lipase. The pancreatic juice helps in the digestion of starch, proteins, nucleic acids and fats.

(ii) Endocrine Part:- The endocrine part of the pancreas consists of groups of cells called the islets of Langerhans. The human pancreas has about one million islets. They are most numerous in the tail of the pancreas. Each islet of Langerhans consists of the following types of cells, which secrete hormones that pass into the circulating blood.

(a) Alpha cells (α-cells):- These cells are more numerous toward the periphery of the islet and constitute 25% of the islet of Langerhans. They produce the hormone glucagon, which promotes the conversion of glycogen into glucose in the liver.

(b) Beta cells (β-cells):- These cells are more numerous toward the middle of the islet and constitute 60% of the islet of Langerhans. They produce the hormone insulin, which promotes the conversion of glucose into glycogen in the liver and muscles.

(c) Delta cells (δ-cells):- These cells are also found towards the periphery of the islet of Langerhans and constitute 10% of the islet. They secrete the hormone somatostatin (SS), which inhibits the secretion of glucagon by α-cells and, to a lesser extent, the secretion of insulin by β-cells. This hormone also decreases the rate of nutrient absorption into the blood from the gastrointestinal tract, and it inhibits the secretion of growth hormone from the anterior lobe of the pituitary gland.

(d) Pancreatic polypeptide cells (PP cells or F-cells):- These cells secrete pancreatic polypeptide (PP), which inhibits the release of pancreatic juice.


INSULIN:-

Structure of insulin :-

Insulin is a small protein which contains two chains (A and B) linked by disulfide bridges. Insulin is released from pancreatic β-cells at a low basal rate and at a much higher stimulated rate in response to a variety of stimuli, especially glucose.

Role of insulin:-

(a) It decreases the level of glucose in the blood.
(b) It promotes protein synthesis in tissues from amino acids.
(c) It reduces the catabolism of proteins; insulin is an anabolic hormone.
(d) It increases the synthesis of fat in adipose tissue from fatty acids.
(e) It reduces the breakdown and oxidation of fat.

Pharmacokinetics & Degradation:-

Pharmacokinetics:- Insulin is destroyed in the gastrointestinal tract, and must therefore be given parenterally. Pulmonary absorption occurs, and inhalation of an aerosol is a newer route of administration. Once absorbed, insulin has an elimination half-life of approximately 10 min; it is inactivated enzymatically in the liver and kidney.


Degradation:- The liver and kidney are the two main organs that remove insulin from the circulation, presumably by hydrolysis of the disulfide connection between the A and B chains through insulinase. Further degradation occurs by proteolysis.

Regulation of insulin secretion:-

Insulin is antagonistic to glucagon: it decreases the level of glucose in the blood, whereas glucagon increases it by stimulating the liver to convert stored glycogen into glucose. (When the blood sugar rises, the secretion of glucagon is suppressed; when it drops, the secretion of glucagon is stimulated.) Insulin acts by increasing the rate at which glucose is transported out of the blood and into cells, and by stimulating muscle cells to take up sugar from the blood and convert it to glycogen. Insulin secretion is regulated by feedback from the blood sugar concentration: when the blood sugar level drops, the secretion of insulin is suppressed; when the blood sugar level increases, the secretion of insulin is stimulated. The amount of insulin secreted by the pancreas per day to maintain normal carbohydrate metabolism is about 1 unit per kg body weight (about 2.5 mg insulin for 60-70 kg body weight). Insulin concentration is regulated by the plasma glucose concentration, with the maximum response at glucose levels between 300 and 500 mg/dl.

[Figure: Schematic diagram of the two-phase (early phase and late phase) release of insulin in response to a constant glucose infusion, comparing the normal response with prediabetes/type 2 diabetes and type 1 diabetes relative to the basal rate. The first phase is missing in type 2 DM, and both phases are missing in type 1 DM.]

In addition, insulin release is stimulated by amino acids, glucagon, and gastrointestinal tract hormones (gastrin, secretin), which are released by eating.


Other stimuli include fatty acids, parasympathetic nervous system stimulation, and drugs that act on the sulfonylurea receptor. Insulin release is inhibited by the sympathetic nervous system (acting through alpha-2 adrenoceptors) and by several peptides (e.g. somatostatin).

The Insulin Receptor:-

When glucose enters the bloodstream from the intestine after a carbohydrate-rich meal, the resulting increase in blood glucose causes increased secretion of insulin. Once insulin has entered the circulation, it is bound by specialized receptors identified in only a few target tissues (e.g. liver, muscle and adipose tissue). The full insulin receptor consists of two heterodimers, each containing an alpha subunit, which is entirely extracellular and constitutes the recognition site, and a beta subunit, which spans the membrane. The beta subunit contains a tyrosine kinase. When insulin binds to the alpha subunit at the outside surface of the cell, tyrosine kinase activity is stimulated in the beta portion (nine substrates have been identified that are phosphorylated by the activated insulin receptor). Autophosphorylation of the beta portion results in translocation of certain proteins, such as glucose transporters, from sequestered sites within adipocytes and muscle cells to exposed locations on the cell surface.



Effects of insulin on its targets:-

Insulin promotes the storage of fat as well as glucose (both sources of energy) within specialized target cells and influences cell growth and metabolic functions of a wide variety of tissues.

1. Action of insulin on glucose transport:-

Insulin has an important effect on several transport molecules that facilitate glucose movement across cell membranes (GLUT 1 to GLUT 4). GLUT-4 (inserted into the membranes of muscle and adipose tissue) is responsible for insulin-mediated uptake of glucose; GLUT-2 mediates transport of glucose into pancreatic β-cells. When blood glucose rises, GLUT-2 transporters carry glucose into β-cells, where it is immediately converted to glucose 6-phosphate by hexokinase IV (glucokinase) and enters glycolysis. The increased rate of glucose catabolism raises [ATP], causing the closing of ATP-gated K+ channels in the plasma membrane. Reduced efflux of K+ depolarizes the membrane, thereby opening voltage-sensitive Ca2+ channels in the plasma membrane. The resulting influx of Ca2+ triggers the release of insulin by exocytosis. Stimuli from the parasympathetic and sympathetic nervous systems also stimulate and inhibit insulin release, respectively. A simple feedback loop limits hormone release: insulin lowers blood glucose by stimulating glucose uptake by the tissues; the reduced blood glucose is detected by β-cells as a diminished flux through the hexokinase reaction, and this slows or stops the release of insulin. This feedback regulation holds the blood glucose concentration nearly constant despite large fluctuations in dietary intake.
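The feedback loop described above can be made concrete with a toy calculation. The following Python sketch is purely illustrative: every rate constant and starting value is invented, and it is not a physiological model, but it shows how the loop (glucose stimulates insulin secretion, insulin drives glucose uptake, falling glucose shuts secretion off) settles back toward the basal level.

    # Toy model of the glucose-insulin negative feedback loop.
    # All parameter values are hypothetical, chosen only for illustration.
    def simulate(glucose0=180.0, basal=90.0, steps=60, dt=1.0):
        glucose, insulin = glucose0, 0.0
        history = []
        for _ in range(steps):
            # secretion rises with glucose above the basal set point
            secretion = max(0.0, 0.05 * (glucose - basal))
            insulin += (secretion - 0.1 * insulin) * dt   # secretion minus clearance
            # insulin-mediated (GLUT-4) uptake pulls glucose back toward basal
            glucose -= 0.02 * insulin * (glucose - basal) * dt
            history.append((glucose, insulin))
        return history

    for g, i in simulate()[::10]:
        print(f"glucose {g:6.1f} mg/dL   insulin {i:5.2f} (arbitrary units)")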

2. Action of insulin on liver: -

The first major organ reached by endogenous insulin via the portal circulation is the liver, where it acts to:

Increase the storage of glucose and reset the liver to the fed state by reversing a number of catabolic mechanisms associated with the postabsorptive state: glycogenolysis, ketogenesis, and gluconeogenesis. These effects are brought about directly, through activation or repression of selective enzymes, or indirectly, by reducing fatty acid flux to the liver via an antilipolytic action on adipocytes. In addition, insulin decreases urea production and protein catabolism, promotes triglyceride synthesis, and promotes potassium and phosphate uptake by the organ.


3. Effect of insulin on muscles

Insulin promotes protein synthesis in muscle by increasing amino acid transport and by stimulating ribosomal activity. It also promotes glycogen synthesis to replace glycogen stores expended by muscle activity.

4. Effect of insulin on adipose tissue

Insulin acts to reduce circulating free fatty acids and to promote triglyceride storage in adipocytes (the most efficient means of storing energy).

CLASSIFICATION OF DM:-

Diabetes Mellitus (DM) can be classified as follows: (1) Primary DM, (2) Secondary DM, (3) Impaired glucose tolerance, (4) Gestational DM.


(1) Primary DM:- Primary DM concerns disorders of the pancreas/insulin itself.

A) Type I DM:- Type I diabetes is also called insulin-dependent diabetes mellitus (IDDM) or juvenile-onset diabetes mellitus. Type I diabetes is of two types: (i) immune-mediated type I DM and (ii) idiopathic (primary) type I DM.

(i) Immune-mediated type I DM:- About 5-10% of all individuals with DM have type I DM. There is insufficient insulin secretion, a condition known as insulinopenia (lack of insulin). The pancreas undergoes an autoimmune attack by the body itself and is rendered incapable of making insulin; abnormal antibodies, which are part of the body's immune system, have been found in the blood of the majority of patients. In autoimmune diseases such as type I diabetes, the immune system mistakenly manufactures antibodies and inflammatory cells that are directed against, and cause damage to, the patient's own body tissues. In persons with type I diabetes, the beta cells of the pancreas, which are responsible for insulin production, are attacked by the misdirected immune system. The tendency to develop the abnormal antibodies of type I diabetes is believed to be, at least in part, inherited.

(ii) Idiopathic (primary) type I DM:- Some patients have no evidence of autoimmunity and are classified as having idiopathic type I DM, attributed to heredity.

B) Type II DM:- Type II diabetes was also referred to as non-insulin-dependent diabetes mellitus (NIDDM) or adult-onset diabetes mellitus (AODM). In type II diabetes, patients can still produce insulin, but do so relatively inadequately for their body's needs, particularly in the face of insulin resistance. In many cases this actually means the pancreas produces larger than normal quantities of insulin. A major feature of type II diabetes is a lack of sensitivity to insulin by the cells of the body. In addition to the problem of increased insulin resistance, the release of insulin by the pancreas may also be defective and suboptimal. In fact, there is a known steady decline in beta-cell production of insulin in type II diabetes that contributes to worsening glucose control. Finally, the liver in these patients continues to produce glucose through a process called gluconeogenesis despite elevated glucose levels; the control of gluconeogenesis becomes compromised.

C) Minor classes:-
a) Prediabetic/potential diabetes: persons with a normal glucose tolerance test (GTT), but with a family history of the disease.


b) Latent/suspected diabetes: persons with a normal GTT who show a diabetic type of glucose tolerance curve after cortisone administration, during pregnancy, or during severe infection.

c) Asymptomatic/chemical diabetes: persons with a diabetic GTT curve but without signs of diabetes.

(2) Secondary diabetes:- Refers to elevated blood glucose levels caused by another medical condition. Secondary diabetes may develop when the pancreatic tissue responsible for the production of insulin is destroyed by disease, such as chronic pancreatitis (inflammation of the pancreas, e.g. by toxins such as excessive alcohol), trauma, or surgical removal of the pancreas.

Diabetes can also result from other hormonal disturbances, such as excessive growth hormone production (acromegaly) and Cushing's syndrome. In acromegaly, a pituitary gland tumor causes excessive production of growth hormone, leading to hyperglycemia. In Cushing's syndrome, the adrenal glands produce an excess of cortisol, which promotes blood sugar elevation.

(3) Impaired glucose tolerance:- People who have fasting glucose concentrations lower than those required for a diagnosis of DM, but whose plasma glucose response during the oral GTT lies toward the upper limit of the normal range. These people are at high risk of developing DM or cardiovascular disease. (Fasting blood glucose level 75-110 mg/dl; postprandial, 2 hours after a meal, 110-140 mg/dl.)

(4) Gestational DM:- Diabetes can occur temporarily during pregnancy. Significant hormonal changes during pregnancy can lead to blood sugar elevation in genetically predisposed individuals; blood glucose elevation during pregnancy is called gestational diabetes. Gestational diabetes usually resolves once the baby is born. However, 25-50% of women with gestational diabetes will eventually develop type II diabetes later in life, especially those who required insulin during pregnancy and those who remain overweight after delivery. Patients with gestational diabetes are usually asked to undergo an oral glucose tolerance test about six weeks after giving birth to determine whether their diabetes has persisted beyond the pregnancy, or whether any evidence (such as impaired glucose tolerance) is present that may be a clue to the patient's future risk of developing diabetes.

Signs and symptoms

Hyperglycemia
Glycosuria - glucose present in the urine


Polydipsia - increased thirst and consequent increased fluid intake
Polyuria - frequent urination
Polyphagia - increase in appetite
Fatigue, nausea and vomiting

Prolonged high blood glucose causes glucose absorption, which leads to changes in the shape of the lenses of the eyes, resulting in vision changes; sustained sensible glucose control usually returns the lens to its original shape. Blurred vision is a common complaint leading to a diabetes diagnosis; type 1 should always be suspected in cases of rapid vision change, whereas with type 2 the change is generally more gradual, but diabetes should still be suspected.

Patients (usually with type 1 diabetes) may also initially present with diabetic ketoacidosis (DKA), an extreme state of metabolic dysregulation characterized by the smell of acetone on the patient's breath; a rapid, deep breathing known as Kussmaul breathing; polyuria; nausea; vomiting and abdominal pain; and any of many altered states of consciousness. In severe DKA, coma may follow, progressing to death. Diabetic ketoacidosis is a medical emergency and requires immediate hospitalization.

Urinary incontinence (loss of bladder control)

Dizziness

Muscle weakness

Difficulty swallowing

Speech impairment

Diagnosis -

The diagnosis of other types of diabetes is usually made in other ways. These include ordinary health screening; detection of hyperglycemia during other medical investigations; and secondary symptoms such as vision changes or unexplainable fatigue. Diabetes is often detected when a person suffers a problem that is frequently caused by diabetes, such as a heart attack, stroke, neuropathy, poor wound healing or a foot ulcer, certain eye problems, certain fungal infections, or delivering a baby with macrosomia or hypoglycemia.

Diabetes mellitus is characterized by recurrent or persistent hyperglycemia, and is diagnosed by demonstrating any one of the following.

Fasting plasma glucose level at or above 126 mg/dL (7.0 mmol/L).
Plasma glucose at or above 200 mg/dL (11.1 mmol/L) two hours after a 75 g oral glucose load as in a glucose tolerance test.
Symptoms of hyperglycemia and casual plasma glucose at or above 200 mg/dL (11.1 mmol/L).

The fasting blood glucose test is the preferred way to diagnose diabetes.

Normal fasting plasma glucose levels are less than 100 mg/dl.
Fasting plasma glucose levels of more than 126 mg/dl on two or more tests on different days indicate diabetes.
A random blood glucose test can also be used to diagnose diabetes; a blood glucose level of 200 mg/dl or higher indicates diabetes.
Patients with fasting glucose levels from 100 to 125 mg/dL (5.6 to 6.9 mmol/L) are considered to have impaired fasting glucose.
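The fasting thresholds above amount to a simple decision rule. A minimal Python sketch using the mg/dL cut-offs quoted in the text (a real diagnosis also requires confirmation on two or more tests on different days):

    def classify_fasting_glucose(fpg_mg_dl):
        # thresholds as given in the text, in mg/dL
        if fpg_mg_dl >= 126:
            return "diabetes (confirm on a second day)"
        if fpg_mg_dl >= 100:
            return "impaired fasting glucose"
        return "normal"

    print(classify_fasting_glucose(92))    # normal
    print(classify_fasting_glucose(110))   # impaired fasting glucose
    print(classify_fasting_glucose(140))   # diabetes (confirm on a second day)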

Oral glucose tolerance test (OGTT) -

The OGTT is a gold standard for making the diagnosis of type II diabetes. It is still commonly used for diagnosing gestational diabetes and in conditions of pre-diabetes.


The person fasts overnight (at least 8 hours); then, first, the fasting plasma glucose is tested. After this test, the person receives 75 grams of glucose (100 grams for pregnant women). There are several methods employed by obstetricians to do this test, but the one described here is standard. Usually, the glucose is in a sweet-tasting liquid that the person drinks. Blood samples are taken at specific intervals to measure the blood glucose.

For the test to give reliable results:-

The person must be in good health.
The person should be normally active (not lying down).
The person should not be taking medicines that could affect the blood glucose.
For three days before the test, the person should have eaten a diet high in carbohydrate (200-300 grams per day).
The morning of the test, the person should not smoke or drink coffee.

Result of the oral glucose tolerance test (OGTT)

Normal Response:- A person is said to have a normal response when the 2-hour glucose level is less than 140 mg/dl, and all values between 0 and 2 hours are less than 200 mg/dl.

Impaired glucose tolerance:- A person is said to have impaired glucose tolerance when the fasting plasma glucose is less than 126 mg/dl and the 2-hour glucose level is between 140 and 199 mg/dl.

Diabetes:- A person has diabetes when two diagnostic tests done on different days show that the blood glucose level is high.

Gestational diabetes:- A woman has gestational diabetes when, on a 100-gram OGTT, she has any two of the following: a fasting plasma glucose of more than 95 mg/dl, a 1-hour glucose level of more than 180 mg/dl, a 2-hour glucose level of more than 155 mg/dl, or a 3-hour glucose level of more than 140 mg/dl.
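These interpretation rules, including the "any two of the following" criterion for gestational diabetes, translate directly into code. The following Python sketch simply restates the thresholds above; the example values are invented:

    def interpret_ogtt(fasting, two_hour):
        # non-pregnant interpretation, values in mg/dL
        if two_hour < 140:
            return "normal response"
        if fasting < 126 and two_hour <= 199:
            return "impaired glucose tolerance"
        return "suggestive of diabetes (repeat on a different day)"

    def gestational_diabetes(fasting, one_hour, two_hour, three_hour):
        # True if any two of the four 100 g OGTT thresholds are exceeded
        exceeded = [fasting > 95, one_hour > 180,
                    two_hour > 155, three_hour > 140]
        return sum(exceeded) >= 2

    print(interpret_ogtt(100, 120))                 # normal response
    print(gestational_diabetes(98, 185, 150, 120))  # True (two criteria met)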


Approaches to management -

Type 1 diabetes risk is known to depend upon a genetic predisposition based on HLA types, an unknown environmental trigger (suspected to be an infection), and an uncontrolled autoimmune response that attacks the insulin-producing beta cells. Some research has suggested that breastfeeding decreases the risk of type 1 diabetes. Giving children 2000 IU of vitamin D during their first year of life is associated with a reduced risk of type 1 diabetes, and vitamin B-3 (niacin) has also been studied as a preventive treatment.

Type 2 diabetes risk can be reduced in many cases by making changes in diet and increasing physical activity. The American Diabetes Association (ADA) recommends maintaining a healthy weight, getting at least 2½ hours of exercise per week, having a modest fat intake, and eating sufficient fiber (e.g., from whole grains). The ADA does not recommend alcohol consumption as a preventive, but it is interesting to note that moderate alcohol intake may reduce the risk (though heavy consumption clearly and significantly damages bodily systems).

Foods rich in vegetable oils, including non-hydrogenated margarines, nuts, and seeds, should replace foods rich in saturated fats from meats and fat-rich dairy products. Consumption of partially hydrogenated fats should be minimized.

Treatment:- Insulin and other drug-based approaches

Insulin

Diabetes mellitus type 1 is a disease caused by the lack of insulin. Insulin must therefore be used in type 1, and it must be injected or inhaled.

Insulin is usually given subcutaneously, either by injections or by an insulin pump. Research is underway into other routes of administration. In acute care settings, insulin may also be given intravenously. There are several types of insulin, characterized by the rate at which they are metabolized by the body.

Anti-diabetic drug :-

Anti-diabetic drugs treat diabetes mellitus by lowering glucose levels in the blood. With the exceptions of insulin, exenatide, and pramlintide, all are administered orally and are thus also called oral hypoglycemic agents or oral antihyperglycemic agents. There are different classes of anti-diabetic drugs, and their selection depends on the nature of the diabetes, the age and situation of the person, as well as other factors. Diabetes mellitus type 2 is a disease of insulin resistance by cells. Treatments include (1) agents which increase the amount of insulin secreted by the pancreas, (2) agents which increase the sensitivity of target organs to insulin, and (3) agents which decrease the rate at which glucose is absorbed from the gastrointestinal tract.

Description of Antidiabetic drug (Analyte) –

Analyte is an oral blood glucose-lowering drug of the meglitinide class used in the management of type 2 diabetes mellitus (also known as non-insulin-dependent diabetes mellitus or NIDDM). Analyte, N,N-dimethylimidodicarbonimidic diamide, is a yellow to pale yellow compound with molecular formula C4H11N5 and a molecular weight of 129.164 g/mol. In addition, each tablet contains the following inactive ingredients: calcium hydrogen phosphate (anhydrous), microcrystalline cellulose, maize starch, polacrilin potassium, povidone, glycerol (85%), magnesium stearate, meglumine, and poloxamer. The 1 mg and 2 mg tablets contain iron oxides (yellow and red, respectively) as coloring agents.

CLINICAL PHARMACOLOGY:-

Mechanism of Action

Analyte lowers blood glucose levels by stimulating the release of insulin from the pancreas. This action is dependent upon functioning beta (β) cells in the pancreatic islets. Insulin release is glucose-dependent and diminishes at low glucose concentrations. Analyte closes ATP-dependent potassium channels in the β-cell membrane by binding at characterizable sites. This potassium channel blockade depolarizes the β-cell, which leads to an opening of calcium channels. The resulting increased calcium influx induces insulin secretion. The ion channel mechanism is highly tissue-selective, with low affinity for heart and skeletal muscle.


[Figure: Mechanism of action of analyte]

Pharmacokinetics:-

Absorption:

After oral administration, analyte is rapidly and completely absorbed from the gastrointestinal tract. After single and multiple oral doses in healthy subjects or in patients, peak plasma drug levels (Cmax) occur within 1 hour (Tmax). Analyte is rapidly eliminated from the blood stream with a half-life of approximately 1 hour. The mean absolute bioavailability is 56%. When analyte was given with food, the mean Tmax was not changed, but the mean Cmax and AUC (area under the time/plasma concentration curve) were decreased 20% and 12.4%, respectively.
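A half-life of about 1 hour implies first-order elimination with rate constant k = ln 2 / t½, roughly 0.69 per hour. A small Python sketch of the resulting decline (the Cmax value here is hypothetical, used only to illustrate the arithmetic):

    import math

    half_life_h = 1.0                  # from the text
    k = math.log(2) / half_life_h      # elimination rate constant, 1/h
    cmax = 50.0                        # ng/mL at Tmax (assumed value)

    for t in (0, 1, 2, 4):             # hours after Tmax
        c = cmax * math.exp(-k * t)
        print(f"{t} h after Tmax: {c:6.2f} ng/mL")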

Distribution:

After intravenous (IV) dosing in healthy subjects, the volume of distribution at steady state (Vss) was 31 L, and the total body clearance (CL) was 38 L/h. Protein binding, principally to human serum albumin, was greater than 98%.

Metabolism:

Analyte is completely metabolized by oxidative biotransformation and direct conjugation with glucuronic acid after either an IV or oral dose. The major metabolites are an oxidized dicarboxylic acid (M2), the aromatic amine (M1), and the acyl glucuronide (M7). The cytochrome P-450 enzyme system, specifically 2C8 and 3A4, has been shown to be involved in the N-dealkylation of analyte to M2 and the further oxidation to M1. Metabolites do not contribute to the glucose-lowering effect of analyte.

Excretion:

Within 96 hours after dosing with 14C-analyte as a single, oral dose, approximately 90% of the radiolabel was recovered in the feces and approximately 8% in the urine. Only 0.1% of the dose is cleared in the urine as parent compound. The major metabolite (M2) accounted for 60% of the administered dose. Less than 2% of parent drug was recovered in feces.

ANALYTE-METFORMIN ORAL WARNINGS-

Metformin can rarely cause a serious (sometimes fatal) condition called lactic acidosis. Stop taking this medication and seek immediate medical attention if you develop any of the following symptoms of lactic acidosis: unusual tiredness, severe drowsiness, chills, blue/cold skin, muscle pain, fast/difficult breathing, unusually slow/irregular heartbeat. Lactic acidosis is more likely to occur in patients who have certain medical conditions, including kidney or liver disease, heavy alcohol use, loss of too much body water (dehydration), recent surgery, conditions that may cause a low oxygen blood level or poor circulation (such as severe congestive heart failure, recent heart attack, recent stroke), or a serious infection.

ANALYTE-METFORMIN ORAL USES -

This anti-diabetic medication is a combination of 2 drugs (analyte and metformin). It is used along with a diet and exercise program to control high blood sugar in patients with type 2 diabetes (non-insulin-dependent diabetes). Analyte works by stimulating the release of your body's natural insulin. Controlling high blood sugar helps prevent kidney damage, blindness, nerve problems, loss of limbs, and sexual function problems. Proper control of diabetes may also lessen your risk of a heart attack or stroke. This medication should not be used to treat people with type 1 diabetes (insulin-dependent diabetes).

ANALYTE-METFORMIN ORAL SIDE EFFECTS - Nausea, diarrhea, and upset stomach may occur as your body adjusts to the metformin. Weight gain and joint pain may also occur. Stomach symptoms that occur after the first days of your treatment may be a sign of lactic acidosis.

This medication may cause low blood sugar (hypoglycemia), especially if you drink large amounts of alcohol, do unusually heavy exercise, or do not consume enough calories from food. Symptoms include cold sweat, blurred vision, dizziness, drowsiness, shaking, fast heartbeat, headache, fainting, tingling of the hands/feet, and hunger. It is a good habit to carry glucose tablets or gel to treat low blood sugar. If you don't have these reliable forms of glucose, rapidly raise your blood sugar by eating a quick source of sugar such as table sugar, honey, or candy, or drink fruit juice or non-diet soda. Tell your doctor about the reaction immediately. To help prevent low blood sugar, eat meals on a regular schedule, and do not skip meals. Check with your doctor or pharmacist to find out what you should do if you miss a meal. Symptoms of high blood sugar (hyperglycemia) include thirst, increased urination, confusion, drowsiness, flushing, rapid breathing, and fruity breath odor. If these symptoms occur, tell your doctor immediately. Your dosage may need to be increased.


BIOEQUIVALENCE STUDY/ BIOANALYSIS

A BA/BE study can be defined as a study determining bioequivalence, for example between two products such as a commercially available brand product and a potential to-be-marketed generic product. Pharmacokinetic studies are conducted whereby each of the preparations is administered in a cross-over study to volunteer subjects, generally healthy individuals but occasionally patients. Serum/plasma samples are obtained at regular intervals and assayed for parent drug (or occasionally metabolite) concentration. Occasionally, blood concentration levels are neither feasible nor possible to compare between the two products (e.g. inhaled corticosteroids); pharmacodynamic endpoints rather than pharmacokinetic endpoints are then used for comparison. For a pharmacokinetic comparison, the plasma concentration data are used to assess key pharmacokinetic parameters such as the area under the curve (AUC), peak concentration (Cmax), time to peak concentration (Tmax), and absorption lag time (tlag). Testing should be conducted at several different doses, especially when the drug displays non-linear pharmacokinetics.
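As an illustration of how these parameters are obtained from the sampled data, the Python sketch below computes AUC(0-t) by the linear trapezoidal rule, together with Cmax and Tmax, from an invented concentration-time profile:

    times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]       # h
    conc  = [0.0, 30.0, 50.0, 35.0, 15.0, 3.0]   # ng/mL (hypothetical)

    # linear trapezoidal rule over successive sampling intervals
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    cmax = max(conc)
    tmax = times[conc.index(cmax)]

    print(f"AUC(0-t) = {auc:.1f} ng*h/mL, Cmax = {cmax} ng/mL, Tmax = {tmax} h")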

In addition to data from bioequivalence studies, other data may need to be submitted to meet regulatory requirements for bioequivalence. Such evidence may include:

analytical method validation
in vitro-in vivo correlation studies

Bioanalytics is broadly divided into two branches, namely bioavailability and bioequivalence.

BIOAVAILABILITY: -

Bioavailability refers to the rate and extent to which the active moiety is absorbed from a drug product and becomes available at the site of action. For drug products that are not intended to be absorbed into the bloodstream, bioavailability may be assessed by measuring the extent to which the active ingredient or active moiety becomes available at the site of action.

“In general it may be defined as the extent at which the active ingredient is delivered to the general circulation from the dosage.”

Chemical equivalence indicates that drug products contain the same compound in the same amount and meet current official standards; however, inactive ingredients in drug products may differ. Bioequivalence indicates that the drug products, when given to the same patient in the same dosage regimen, result in equivalent concentrations of drug in plasma and tissues. Therapeutic equivalence indicates that drug products, when given to the same patient in the same dosage regimen, have the same therapeutic and adverse effects. Bioequivalent products are expected to be therapeutically equivalent. Therapeutic nonequivalence (e.g., more adverse effects, less efficacy) is usually discovered during long-term treatment, when patients who are stabilized on one formulation are given a nonequivalent substitute.

Causes of low bioavailability: Orally administered drugs must pass through the intestinal wall and then through the portal circulation to the liver; both are common sites of 1st-pass metabolism (metabolism of a drug before it reaches systemic circulation). Thus, many drugs may be metabolized before adequate plasma concentrations are reached. Low bioavailability is most common with oral dosage forms of poorly water-soluble, slowly absorbed drugs.

Insufficient time for absorption in the GI tract is a common cause of low bioavailability. If the drug does not dissolve readily or cannot penetrate the epithelial membrane (eg, if it is highly ionized and polar), time at the absorption site may be insufficient. In such cases, bioavailability tends to be highly variable as well as low.

Age, sex, physical activity, genetic phenotype, stress, disorders (eg, achlorhydria, malabsorption syndromes), or previous GI surgery (eg, bariatric surgery) can also affect drug bioavailability.

Chemical reactions that reduce absorption can reduce bioavailability.

Bioavailability is the fraction of the administered dose that reaches the systemic circulation. Bioavailability is 100% for an intravenous injection. It varies for other routes depending on incomplete absorption, first-pass hepatic metabolism, etc. Thus one plots plasma concentration against time, and the bioavailability is derived from the area under the curve.
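In practice, absolute bioavailability is estimated as the dose-normalized ratio of the oral AUC to the intravenous AUC. A minimal Python sketch with hypothetical numbers:

    # F = (AUC_oral / Dose_oral) / (AUC_iv / Dose_iv)
    auc_oral, dose_oral = 120.0, 10.0   # ng*h/mL, mg (hypothetical)
    auc_iv,   dose_iv   = 215.0, 10.0

    F = (auc_oral / dose_oral) / (auc_iv / dose_iv)
    print(f"absolute bioavailability F = {F:.2f}")   # ~0.56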

BIOEQUIVALENCE :-

Definitions of Bioequivalence –

o Bioequivalence is a term in pharmacokinetics used to assess the expected in vivo biological equivalence of two proprietary preparations of a drug.

o The scientific basis upon which brand-name drugs are compared to their generic equivalents. Studies must be conducted to ensure the two products do not differ in safety, efficacy, and bioavailability when administered at the same dosages.

27

Page 20: Anna Review of Literature

o Term used when the active ingredients of two products are the same, and the rate and extent of absorption into the body are the same.

o The property wherein two drugs with identical active ingredients, such as a brand-name drug and its generic equivalent, or two different dosage forms, such as tablet and oral suspension, of the same drug possess similar bioavailability and produce the same effect at the site of physiological activity.

Bioequivalence is a term in pharmacokinetics used to assess the expected in vivo biological equivalence of two proprietary preparations of a drug. If two products are said to be bioequivalent, it means that they would be expected to be, for all intents and purposes, the same. Birkett (2003) defined bioequivalence by stating that "two pharmaceutical products are bioequivalent if they are pharmaceutically equivalent and their bioavailabilities (rate and extent of availability) after administration in the same molar dose are similar to such a degree that their effects, with respect to both efficacy and safety, can be expected to be essentially the same. Pharmaceutical equivalence implies the same amount of the same active substance(s), in the same dosage form, for the same route of administration and meeting the same or comparable standards." The United States Food and Drug Administration (FDA) has defined bioequivalence as "the absence of a significant difference in the rate and extent to which the active ingredient or active moiety in pharmaceutical equivalents or pharmaceutical alternatives becomes available at the site of drug action when administered at the same molar dose under similar conditions in an appropriately designed study" (FDA, 2003).
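Regulatory practice commonly operationalizes this definition as average bioequivalence: the 90% confidence interval for the test/reference geometric mean ratio of the log-transformed exposure measures (AUC, Cmax) must fall within 80-125%. The Python sketch below is a simplified paired analysis with invented data; a real crossover study uses an ANOVA appropriate to the design, and the 80-125% range is the conventional acceptance criterion rather than a figure stated in the text above.

    import math, statistics

    test = [100.0, 95.0, 110.0, 102.0, 98.0, 105.0]   # AUC, test product
    ref  = [ 98.0, 99.0, 104.0, 100.0, 97.0, 108.0]   # AUC, reference

    diffs  = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    mean_d = statistics.mean(diffs)
    se     = statistics.stdev(diffs) / math.sqrt(len(diffs))
    t_crit = 2.015                                    # t(0.95, df=5)

    lo = math.exp(mean_d - t_crit * se)
    hi = math.exp(mean_d + t_crit * se)
    print(f"90% CI for the geometric mean ratio: {lo:.3f} - {hi:.3f}")
    print("bioequivalent" if lo >= 0.80 and hi <= 1.25 else "not shown bioequivalent")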

RELATION BETWEEN BIOEQUIVALENCE & BIOAVAILABILITY :-

Both bioavailability and bioequivalence focus on the release of a drug substance from its dosage form and its subsequent absorption into the systemic circulation. For this reason, similar approaches to measuring bioavailability should generally be followed in demonstrating bioequivalence. Bioavailability can generally be documented by a systemic exposure profile obtained by measuring drug and/or metabolite concentrations in the systemic circulation over time. The systemic exposure profile determined during clinical trials in early drug development can serve as a benchmark for subsequent BE studies. Bioequivalence studies should be conducted for the comparison of two medicinal products containing the same active substance. The studies should provide an objective means of critically assessing the possibility of alternative use of them. Two products marketed by different licensees, containing the same active ingredient(s), must be shown to be therapeutically equivalent to one another in order to be considered interchangeable. Several test methods are available to assess equivalence, including:

i. Comparative bioavailability (bioequivalence) studies, in which the active drug substance or one or more metabolites is measured in an accessible biological fluid such as plasma, blood or urine.

ii. comparative pharmacodynamic studies in humans


iii. comparative clinical trials
iv. in-vitro dissolution tests

The guidelines describe when bioavailability or bioequivalence studies are necessary and describe requirements for their design, conduct, and evaluation. The possibility of using in vitro instead of in vivo studies with pharmacokinetic end points is also envisaged.

GENERIC DRUG :-

A generic drug (plural: generic drugs, short: generics) is a drug which is produced and distributed without patent protection. The generic drug may still have a patent on the formulation but not on the active ingredient. A generic must contain the same active ingredients as the original formulation. According to the U.S. Food and Drug Administration (FDA), generic drugs are identical to, or within an acceptable bioequivalent range of, the brand-name counterpart with respect to pharmacokinetic and pharmacodynamic properties. By extension, therefore, generics are considered (by the FDA) identical in dose, strength, route of administration, safety, efficacy, and intended use [1]. The FDA's use of the word "identical" is very much a legal interpretation, and is not literal. In most cases, generic products become available once the patent protections afforded to the original developer have expired. When generic products become available, the market competition often leads to substantially lower prices for both the original brand-name product and the generic forms. The time it takes a generic drug to appear on the market varies. In the US, drug patents give twenty years of protection, but they are applied for before clinical trials begin, so the effective life of a drug patent tends to be between seven and twelve years.

REFERENCE PRODUCT :-

The reference product is a pharmaceutical product which is identified by the Licensing Authority as “Designated Reference Product” and contains the same active ingredient(s) as the new drug. The Designated Reference Product will normally be the global innovator’s product. An applicant seeking approval to market a generic equivalent must refer to the Designated Reference Product to which all generic versions must be shown to be bioequivalent. For subsequent new drug applications in India the Licensing Authority may, however, approve another Indian product as Designated Reference Product.

REGULATORY AGENCIES:-

If a clinical trial concerns a new regulated drug or medical device (or an existing drug for a new purpose), the appropriate regulatory agency for each country where the sponsor wishes to sell the drug or device is supposed to review all study data before allowing the drug/device to proceed to the next phase, or to be marketed. However, if the sponsor withholds negative data, or misrepresents data it has acquired from clinical trials, the regulatory agency may make the wrong decision.

29

Page 22: Anna Review of Literature

In the U.S., the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of fraudulent data). Avoiding an audit is an incentive for investigators to follow study procedures.

Characteristics to be investigated during bioavailability / bioequivalence studies :-

In most cases, evaluations of bioavailability and bioequivalence will be based upon the measured concentrations of the active drug substance(s) in the biological matrix. In some situations, however, measurement of an active or inactive metabolite may be necessary. These situations include: (a) where the concentrations of the drug(s) may be too low to measure accurately in the biological matrix, (b) limitations of the analytical method, (c) unstable drug(s), (d) drug(s) with a very short half-life, or (e) the case of prodrugs.

Racemates should be measured using an achiral assay method. Measurement of individual enantiomers in bioequivalence studies is recommended where all of the following criteria are met: (a) the enantiomers exhibit different pharmacodynamic characteristics; (b) the enantiomers exhibit different pharmacokinetic characteristics; (c) primary efficacy/safety activity resides with the minor enantiomer; (d) non-linear absorption is present for at least one of the enantiomers. The plasma concentration-time curve is mostly used to assess the rate and extent of absorption of the study drug. This assessment relies on pharmacokinetic parameters such as Cmax, Tmax, AUC0-t and AUC0-∞.
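AUC0-∞ is conventionally obtained by extrapolating beyond the last measurable concentration: the tail area is estimated as Clast/k, where k is the terminal elimination rate constant. A short Python sketch with hypothetical values:

    import math

    auc_0_t   = 156.0    # ng*h/mL, from the trapezoidal rule (hypothetical)
    c_last    = 3.0      # ng/mL, last measurable concentration
    half_life = 1.0      # h, terminal half-life (hypothetical)
    k = math.log(2) / half_life

    auc_0_inf = auc_0_t + c_last / k   # AUC(0-inf) = AUC(0-t) + Clast/k
    print(f"AUC(0-inf) = {auc_0_inf:.1f} ng*h/mL")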

BIOANALYTICAL METHODOLOGY:-

The bioanalytical methods used to determine the drug and/or its metabolites in plasma, serum, blood or urine or any other suitable matrix must be well characterised, standardised, fully validated and documented to yield reliable results that can be satisfactorily interpreted. Although there are various stages in the development and validation of an analytical procedure, the validation of the analytical method can be envisaged to consist of two distinct phases:

1. The pre-study phase, which comes before the actual start of the study and involves validation of the method on the biological matrix, using blank human plasma samples and spiked plasma samples.


2. The study phase, in which the validated bioanalytical method is applied to the actual analysis of samples from bioavailability and bioequivalence studies, mainly to confirm the stability, accuracy and precision.

Development of a suitable bioanalytical method :-

Based on the chemical structure and the functional groups on the molecule, a systematic examination of the molecule is undertaken to determine: (A) the most suitable analytical parameters for quantification, and (B) the most effective means of obtaining selective extraction, cleanup and fractionation of the parent compound and any metabolites present from interfering materials derived from the substrate matrix prior to quantitation. Certain intrinsic properties of drug molecules allow for their detection by physicochemical means.

Sample Preparation Methods:
Protein Precipitation (PP)
Liquid-Liquid Extraction (LLE)
Solid Phase Extraction (SPE)

Protein Precipitation:

The preparation of protein-free solutions is especially important for the analysis of blood and tissue extracts. Protein precipitation steps include the addition of an acid, a solution containing a heavy metal ion, or a water-miscible organic solvent to the biological fluid. Following admixture of the precipitant, the sample is centrifuged to produce a clear supernatant containing the compound of interest. The protein-free solution may then be subjected to further processing, such as liquid extraction with an immiscible organic solvent, or it might be introduced directly into the analytical system.

Liquid/liquid extraction:

Liquid/liquid extraction is a traditional and cheaper alternative, in which the analytes partition between two immiscible solvents. At equilibrium the solvents form two immiscible layers; the analytes preferentially partition into one phase and the endogenous material into the other. The stoichiometric ratio of the analyte in the organic phase compared to that in the aqueous phase is known as the distribution ratio, D. Ideally this ratio should approach 100% in order to minimize losses through the effects of small changes in sample composition, temperature and pH. Reproducibility also increases with increasing extraction efficiency, although a consistently low recovery may be acceptable if an internal standard is used to compensate for changes in efficiency. Unlike solid phase systems, liquid/liquid systems are more likely to give consistent results year after year, as there is usually less batch-to-batch variation with solvents; diethyl ether will always be diethyl ether, whereas the bonded phase coverage of a solid phase extraction cartridge may show significant differences. The solvents are not, however, necessarily stable. For example, ethers are prone to oxidation through reaction with atmospheric oxygen to produce highly reactive peroxides, easily capable of oxidizing certain amino groups and, at high levels, potentially explosive.
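The distribution ratio D determines the recovery a given extraction scheme can achieve: after n sequential extractions with an organic-to-aqueous volume ratio Vr, the fraction recovered is 1 - (1/(1 + D*Vr))^n. A small Python sketch (D and the volumes are hypothetical):

    def recovery(D, v_org, v_aq, n=1):
        # fraction of analyte recovered after n sequential extractions
        return 1 - (1 / (1 + D * v_org / v_aq)) ** n

    print(f"one extraction,  D=5: {recovery(5, 2.0, 1.0):.1%}")     # ~90.9%
    print(f"two extractions, D=5: {recovery(5, 2.0, 1.0, 2):.1%}")  # ~99.2%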

Solid Phase Extraction:

Solid-phase extraction (SPE) is an extraction technique based on selective partitioning of one or more components between two phases, one of which is a solid sorbent. The second phase typically is a liquid, but it may also be an emulsion, a gas, or a supercritical fluid. Solid-phase extraction is today the most popular sample preparation method and a very active area in the field of separation science. The increased development of solid-phase extraction has occurred with many improvements in formats (cartridges, discs and fibers), automation and choice of sorbents for trapping analytes over a wide range of polarities, such as highly cross-linked copolymers, functionalized copolymers or some specific n-alkylsilicas. SPE is used most often to prepare liquid samples and extract semivolatile or nonvolatile analytes, but it can also be used with solids that are pre-extracted into solvents. SPE products are excellent for sample extraction, concentration, and cleanup. They are available in a wide variety of chemistries, adsorbents, and sizes. Selecting the most suitable product for each application and sample is important.

Steps of a Solid Phase Extraction Procedure

The steps involved in a complete solid phase extraction procedure are given below. In many applications, one or more of the steps, listed below and described here, can be omitted and the procedure simplified.

1. Pretreatment of the sample
2. Conditioning of the cartridge
3. Loading the sample
4. Washing
5. Elution of the fractions

1. Pretreatment of the sample

In many cases, the sample is in a solid form. Therefore, the first step in the pretreatment of the sample is either to dissolve or homogenize the solid and extract the analyte in an appropriate solvent. Next, the sample has to be brought into a state that facilitates the adsorption of the analytes onto the solid phase extraction column. If, for example, the sample is dissolved in an organic solvent such as methanol or acetonitrile, and a reversed-phase method is to be used for sample cleanup, you can dilute the sample with water to promote the adsorption of the analytes onto the reversed-phase sorbent. Similarly, consider adjusting the pH of the sample to promote and control adsorption, if ionogenic compounds are involved. The pretreatment step may also include the addition of an internal standard for convenient quantitative analysis.

2. Conditioning of the cartridge

It is usually advisable to precondition the sorbent with the solvent used to load the sample. In some cases, this step can be omitted to streamline the process. In the case of reversed-phase sorbents, preconditioning of the sorbent with an organic solvent such as methanol, acetonitrile, isopropanol, or tetrahydrofuran is usually necessary to obtain reproducible results. Without this step, a highly aqueous solvent cannot penetrate the pores and wet the surface; thus, only a small fraction of the surface area is available for interaction with the analyte. For the same reason, it is important not to let the cartridge dry out between the solvation step and the addition of the sample. A complete preconditioning of a reversed-phase cartridge includes the solvation step and an equilibration with a low-strength solvent such as water or buffer.

3. Loading the sample

A discrete volume of liquid sample, usually of the same magnitude as that of the sorbent bed, is applied to the column. This process is essentially that of frontal loading, which is designed so that components of interest are bound to the sorbent and unwanted components are unretained at the surface.

4. Washing

The solvent is chosen so that it does not displace the components of interest, but weakly bound components are effectively displaced. In addition, the wash step removes unwanted material from the pores and interstices of the packed bed. This process is an elution step in which the wash solvent also acts in some respects as a displacer.

5. Elution of the fractions

A stronger solvent displaces the components of interest in a version of the washing step. Solvent molecules or ions (in ion exchange processes) take the place of adsorbed analytes on the surface. The nature and volume of the elution solvent must be such that no proportion of the component of interest remains on the surface or in the pore or interstitial volumes.


Bioanalytical Method Validation :-

Bioanalytical Method Validation (BMV) employed for the quantitative determination of drugs and their metabolites in biological fluids plays a significant role in the evaluation and interpretation of bioavailability, bioequivalence, pharmacokinetic and toxicokinetic study data. These studies generally support regulatory filings. It is therefore important that guiding principles for the validation of these analytical methods be established and disseminated to the pharmaceutical community. This guidance has been adopted universally as a standard procedure for validating bioanalytical assays used for pharmacokinetic, bioavailability and bioequivalence studies intended for regulatory submission.

The bioanalytical validation workshop in 1990 was the first major workshop dedicated to investigating and harmonizing procedures required in method validations. The workshop was cosponsored by the American Association of Pharmaceutical Scientists (AAPS), the United States Food and Drug Administration (FDA), the International Pharmaceutical Federation (FIP), the Health Protection Branch (HPB) and the Association of Analytical Chemists (AOAC). The conference focused on requirements for bioanalytical methods validation, procedures to establish reliability of the analytical method, parameters to ensure acceptability of analytical method performance, method development (Prestudy validation) and method application (in-study validation). The workshop defined essential parameters for BMV — accuracy, precision, selectivity, sensitivity, reproducibility, limit of quantification, stability, standard curve, recovery and replicate analysis. Several national and international conferences were held to discuss the first workshop report. However, the workshop report was not an official document of the FDA. Therefore, the agency decided to develop and publish a draft guidance in January 1999. The draft guidance was primarily based on the first workshop report and the experience gained by the agency since the first workshop. The second workshop was cosponsored by AAPS and the FDA and was held in January 2000, one year after the publication of the draft guidance by the agency. The workshop focused on discussing the advances in analytical technology that had occurred over the past decade and reconfirmed and updated the principles of BMV.

There had been significant advancements in the field of mass spectrometry, with the development of new interfaces and ionization techniques. These advancements resulted in the rapid emergence and widespread commercial use of "hyphenated" mass spectrometry-based assays (e.g., liquid chromatography-mass spectrometry-mass spectrometry [LC-MS-MS]), which have largely replaced conventional high-performance liquid chromatography (HPLC), gas chromatography (GC) and GC-MS assays. The second workshop also discussed different categories of validation, namely Partial Validation, Cross-Validation, and Full Validation. The workshop reemphasized that it is not necessary to have 100% recovery, but it is important to have reproducible and consistent recovery when using an extraction procedure. The importance of standard curve and quality control acceptance criteria was reemphasized. This workshop resulted in the report "A revisit with a decade of progress" and formed the basis for the FDA Guidance on Bioanalytical Methods Validation (May 2001). A draft FDA Guidance for Industry, entitled "Safety Testing of Drug Metabolites", was issued in June 2005 by the Center for Drug Evaluation and Research. The third workshop in the series was held in May 2006. The purpose of this workshop was to identify, review and evaluate problems, common practices, the existing Guidance and articles on the subject. The focus of the workshop was primarily on quantitative bioanalytical methods validation and their use in sample analysis, focusing on chromatographic and ligand binding assays. A report of this workshop appears in a theme issue of The AAPS Journal.

Method validation is a process that demonstrates that the method will successfully meet or exceed the minimum standards recommended in the Food and Drug Administration (FDA) Guidance. A number of guidance documents on this subject have been issued by various international organizations and conferences. All of these documents are important and potentially helpful for any method validation. Different types and levels of validation are defined and characterized as follows:

A. Full Validation

Full validation is important when developing and implementing a bioanalytical method for the first time. Full validation is important for a new drug entity. A full validation of the revised assay is important if metabolites are added to an existing assay for quantification.

B. Partial Validation

Partial validations are modifications of already validated bioanalytical methods. Partial validation can range from as little as one intra-assay accuracy and precision determination to a nearly full validation. Typical bioanalytical method changes that fall into this category include, but are not limited to:
Bioanalytical method transfers between laboratories or analysts
Change in analytical methodology (e.g., change in detection systems)
Change in anticoagulant in harvesting biological fluid
Change in matrix within species (e.g., human plasma to human urine)
Change in sample processing procedures
Change in species within matrix (e.g., rat plasma to mouse plasma)
Change in relevant concentration range
Changes in instruments and/or software platforms
Limited sample volume (e.g., pediatric study)
Rare matrices
Selectivity demonstration of an analyte in the presence of concomitant medications
Selectivity demonstration of an analyte in the presence of specific metabolites

C. Cross-Validation

Cross-validation is a comparison of validation parameters when two or more bioanalytical methods are used to generate data within the same study or across different studies. An example of cross-validation would be a situation where an original validated

bioanalytical method serves as the reference and the revised bioanalytical method is the comparator. The comparisons should be done both ways.

Reference Standard

Analysis of drugs and their metabolites in a biological matrix is carried out using samples spiked with calibration (reference) standards and using quality control (QC) samples. The purity of the reference standard used to prepare spiked samples can affect study data. For this reason, an authenticated analytical reference standard of known identity and purity should be used to prepare solutions of known concentrations. If possible, the reference standard should be identical to the analyte. When this is not possible, an established chemical form (free base or acid, salt or ester) of known purity can be used. Three types of reference standards are usually used: (1) certified reference standards (e.g., USP compendial standards); (2) commercially supplied reference standards obtained from a reputable commercial source; and/or (3) other materials of documented purity custom-synthesized by an analytical laboratory or other noncommercial establishment. The source and lot number, expiration date, certificates of analysis when available and/or internally or externally generated evidence of identity and purity should be furnished for each reference standard.

Principles of Bioanalytical Method Validation and Establishment

The following principles are followed in bioanalytical method validation and establishment:

The fundamental parameters for ensuring the acceptability of the performance of a bioanalytical method are accuracy, precision, selectivity, sensitivity, reproducibility and stability.

A specific, detailed description of the bioanalytical method should be written. This can be in the form of a protocol, study plan, report and/or SOP.

Each step in the method should be investigated to determine the extent to which environmental, matrix, material, or procedural variables can affect the estimation of analyte in the matrix from the time of collection of the material up to and including the time of analysis.

It may be important to consider the variability of the matrix due to the physiological nature of the sample. In the case of LC-MS-MS-based procedures, appropriate steps should be taken to ensure the lack of matrix effects throughout the application of the method, especially if the nature of the matrix changes from the matrix used during method validation.

A bioanalytical method should be validated for the intended use or application. All experiments used to make claims or draw conclusions about the validity of the method should be presented in a report (method validation report).

Whenever possible, the same biological matrix as the matrix in the intended samples should be used for validation purposes. (For tissues of limited availability, such as bone marrow, physiologically appropriate proxy matrices can be substituted.)

The stability of the analyte (drug and/or metabolite) in the matrix during the collection process and the sample storage period should be assessed, preferably prior to sample analysis.

For compounds with potentially labile metabolites, the stability of analyte in matrix from dosed subjects (or species) should be confirmed.

The accuracy, precision, reproducibility, response function and selectivity of the method for endogenous substances, metabolites and known degradation products should be established for the biological matrix. For selectivity, there should be evidence that the substance being quantified is the intended analyte.

The concentration range over which the analyte will be determined should be defined in the bioanalytical method, based on evaluation of actual standard samples over the range, including their statistical variation. This defines the standard curve.

A sufficient number of standards should be used to adequately define the relationship between concentration and response. The relationship between response and concentration should be demonstrated to be continuous and reproducible. The number of standards used should be a function of the dynamic range and nature of the concentration-response relationship. In many cases, six to eight concentrations (excluding blank values) can define the standard curve. More standard concentrations may be recommended for nonlinear than for linear relationships.

The ability to dilute samples originally above the upper limit of the standard curve should be demonstrated by accuracy and precision parameters in the validation.

In consideration of high throughput analyses, including but not limited to multiplexing, multicolumn and parallel systems, sufficient QC samples should be used to ensure control of the assay. The number of QC samples to ensure proper control of the assay should be determined based on the run size. The placement of QC samples should be judiciously considered in the run.

For a bioanalytical method to be considered valid, specific acceptance criteria should be set in advance and achieved for accuracy and precision for the validation of QC samples over the range of the standards.

Validation Parameters

The essential parameters required according to the FDA Guidance [5] are selectivity, sensitivity, accuracy, precision, reproducibility and stability. While obtaining these parameters, other parameters are also determined during validation (e.g., extraction efficiency, calibration range and dilution integrity). These validation parameters are described below in detail:

Fig: Method development and validation parameters - precision, accuracy, limit of quantitation, selectivity, range, linearity and ruggedness
System Suitability

Scientifically qualified and properly maintained instruments should be used for the implementation of bioanalytical methods in routine drug analysis. As part of instrument qualification, a system suitability check ensures that the system is operating properly at the time of analysis. System suitability checks are most appropriately used for chromatographic methods to ensure that the system is sufficiently sensitive, specific and reproducible for the current analytical run.
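
A minimal sketch of such a check, assuming replicate injections of a single reference solution; the 4% CV acceptance threshold is an illustrative assumption, not a limit taken from the Guidance:

```python
# Minimal sketch: system suitability from replicate injections of a reference
# solution. The 4% CV threshold is an illustrative assumption, not a value
# specified in the FDA Guidance.
from statistics import mean, stdev

def system_suitability(peak_areas, max_cv_pct=4.0):
    """Return (cv_pct, passed) for replicate injections of one standard."""
    cv_pct = 100 * stdev(peak_areas) / mean(peak_areas)
    return cv_pct, cv_pct <= max_cv_pct

areas = [10450, 10510, 10390, 10475, 10430, 10500]  # six replicate injections
cv, ok = system_suitability(areas)
print(f"CV = {cv:.2f}%  ->  {'PASS' if ok else 'FAIL'}")
```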

Selectivity

Selectivity is the ability of an analytical method to differentiate and quantify the analyte in the presence of other components in the sample. For selectivity, analyses of blank samples of the appropriate biological matrix (plasma, urine or other matrix) should be obtained from at least six sources. Each blank sample should be tested for interference, and selectivity should be ensured at the lower limit of quantification (LLOQ). The peak response in the blank matrix at the retention time of the analyte(s) should be no more than 20% of the response for the LLOQ sample.
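
The sketch below applies the 20%-of-LLOQ criterion just described to blank responses from six matrix lots; the lot names and peak areas are illustrative assumptions:

```python
# Minimal sketch of the selectivity check: the blank-matrix response at the
# analyte retention time should not exceed 20% of the LLOQ response.
lloq_response = 1250.0  # peak area of the LLOQ sample (illustrative)

blank_responses = {  # peak area in blank matrix from six sources (illustrative)
    "lot_1": 85.0, "lot_2": 120.0, "lot_3": 64.0,
    "lot_4": 210.0, "lot_5": 97.0, "lot_6": 143.0,
}

for lot, resp in blank_responses.items():
    pct = 100 * resp / lloq_response
    print(f"{lot}: {pct:.1f}% of LLOQ response -> {'PASS' if pct <= 20 else 'FAIL'}")
```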

Matrix Effect

In LC-MS-MS-based assays, the presence of unmonitored, co-eluting compounds from the matrix can affect the detection of analytes. This phenomenon is commonly known as the matrix effect. The whitepaper from the 3rd Bioanalytical Workshop proposed the determination of matrix factors from six independent sources of matrix as a way of assessing the matrix effect. The matrix factor (MF) is defined as:

MF = (peak response in the presence of matrix ions) / (peak response in the absence of matrix ions)

where peak response is defined as the peak area, peak height, peak area ratio (PAR) or peak height ratio (PHR) of the chromatographic peaks. The peak area (or height) ratio is the ratio of the peak area (or height) of the analyte to that of the internal standard (IS). It is recommended that matrix factors be determined in six independent lots of matrix. The variability in matrix factors, as measured by the coefficient of variation (CV), should be less than 15%.
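
A minimal sketch of this calculation, assuming illustrative peak areas for post-extraction spiked samples (with matrix) and neat solutions (without matrix) in six lots:

```python
# Minimal sketch of the matrix factor: MF = response with matrix / response
# without matrix, in six independent lots; the CV of the MFs should be < 15%.
from statistics import mean, stdev

with_matrix    = [9800, 10150, 9650, 10020, 9900, 9740]      # illustrative
without_matrix = [10200, 10180, 10240, 10150, 10210, 10190]  # illustrative

mfs = [w / n for w, n in zip(with_matrix, without_matrix)]
cv = 100 * stdev(mfs) / mean(mfs)
print(f"Matrix factors: {[round(m, 3) for m in mfs]}")
print(f"CV of MF = {cv:.1f}%  ->  {'PASS' if cv < 15 else 'FAIL'}")
```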

Calibration/Standard Curve

A calibration (standard) curve is the relationship between instrument response and known concentrations of the analyte. A calibration curve should be generated for each

analyte in the sample. A sufficient number of standards should be used to adequately define the relationship between concentration and response. A calibration curve should be prepared in the same biological matrix as the samples in the intended study by spiking the matrix with known concentrations of the analyte. The number of standards used in constructing a calibration curve will be a function of the anticipated range of analytical values and the nature of the analyte/response relationship. Concentrations of standards should be chosen on the basis of the concentration range expected in a particular study. A calibration curve should consist of a blank sample (matrix sample processed without internal standard), a zero sample (matrix sample processed with internal standard) and six to eight non-zero samples covering the expected range, including the lower limit of quantification.

Lower Limit of Quantification (LLOQ)

The lowest standard on the calibration curve should be accepted as the limit of quantification if the following condition is met: the analyte peak (response) should be identifiable, discrete and reproducible, with a precision of 20% and an accuracy of 80-120%.

Calibration Curve/Standard Curve/Concentration-Response

The simplest model that adequately describes the concentration-response relationship should be used. Selection of weighting and use of a complex regression equation should be justified. The following conditions should be met in developing a calibration curve:

±20% deviation of the lower limit of quantification from the nominal concentration
±15% deviation of standards other than the lower limit of quantification from the nominal concentration

At least four out of six non-zero standards should meet the above criteria, including the lower limit of quantification and the calibration standard at the highest concentration. Excluding standards should not change the model used.
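
The sketch below applies the ±20%/±15% limits and the four-out-of-six rule to a set of back-calculated standards; all concentrations (ng/mL) are illustrative assumptions:

```python
# Minimal sketch of the calibration-standard acceptance rules: ±20% at the
# LLOQ, ±15% elsewhere; at least 4 of 6 non-zero standards must pass,
# including the LLOQ and the highest standard.
nominal   = [1, 5, 25, 100, 400, 1000]             # LLOQ first, ULOQ last
back_calc = [1.15, 4.6, 26.8, 97.0, 441.0, 988.0]  # illustrative results

def within(nom, obs, tol_pct):
    return abs(obs - nom) / nom * 100 <= tol_pct

results = [within(n, o, 20 if i == 0 else 15)
           for i, (n, o) in enumerate(zip(nominal, back_calc))]

curve_ok = sum(results) >= 4 and results[0] and results[-1]
print(f"standards passing: {sum(results)}/6 -> curve {'ACCEPTED' if curve_ok else 'REJECTED'}")
```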

Accuracy and Precision

The accuracy and precision of the assay should be determined for both intra-run and inter-run analyses. They are determined by analyzing quality control (QC) samples at a minimum of three concentrations (low, mid and high) representing the entire range of the calibration curve. The concentration of the low QC should be near the lower limit of quantification (no more than three times the LLOQ concentration). The mid QC concentration should be somewhere in the middle of the calibration range; it is recommended that it be near the geometric mean of the low and high QC concentrations. The high QC concentration should be near the upper end of the calibration curve (within the upper quartile of the calibration range). At least five replicates at each concentration should be analyzed. In addition to determining the accuracy and precision of these QC samples, the accuracy and precision at the lower limit of

quantification level should also be determined, as described in the "Sensitivity" section. For intra-run accuracy and precision, the mean and coefficient of variation of the observed QC concentrations within a run should be determined. The mean of the observed concentrations should be within ±15% of the nominal value at all QC concentrations, and the coefficient of variation (indicating precision) around the mean observed concentration should not exceed 15% at any concentration. For inter-run accuracy and precision, the mean and coefficient of variation of the QC samples at each concentration from multiple runs (at least three) should be determined. The mean observed concentration should be within ±15% of the nominal concentration, and the coefficient of variation should be less than 15%, at all concentrations.
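
A minimal sketch of the intra-run evaluation, assuming five illustrative replicate results (ng/mL) at each of three QC levels:

```python
# Minimal sketch: mean within ±15% of nominal and CV <= 15% at each QC level.
from statistics import mean, stdev

qc_runs = {  # nominal concentration -> five observed values in one run
    3.0:   [2.8, 3.1, 2.9, 3.2, 3.0],   # low QC (within 3x the LLOQ)
    450.0: [460, 438, 455, 472, 445],   # mid QC (near the geometric mean)
    800.0: [812, 790, 835, 780, 805],   # high QC (upper part of the range)
}

for nominal, obs in qc_runs.items():
    bias = 100 * (mean(obs) - nominal) / nominal
    cv = 100 * stdev(obs) / mean(obs)
    ok = abs(bias) <= 15 and cv <= 15
    print(f"QC {nominal:6.1f}: bias {bias:+5.1f}%, CV {cv:4.1f}% -> {'PASS' if ok else 'FAIL'}")
```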

Sensitivity:-

Sensitivity of the method is defined as the lowest concentration that can be measured with acceptable accuracy and precision. The accuracy and precision at the lower limit of quantification (LLOQ) should be determined by analyzing at least five replicates of the sample at the LLOQ concentration on at least one of the validation days. These samples should be independent of those used for construction of the calibration curve. The accuracy, as determined by the relative error (RE%), should be within ±20% at this concentration, and the coefficient of variation should be less than 20%.

Recovery:-

Recovery pertains to the extraction efficiency of an analytical method within the limits of variability. Recovery of the analyte need not be 100%, but the extent of recovery of an analyte and of the internal standard should be consistent, precise and reproducible. Recovery experiments should be performed by comparing the analytical results for extracted samples at three concentrations (low, medium and high) with unextracted standards that represent 100% recovery:

Recovery = (detector response of the analyte from an extracted sample) / (detector response of the analyte from an unextracted sample)
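
A minimal sketch of this comparison, with illustrative peak areas at the three levels:

```python
# Minimal sketch: recovery = extracted response / unextracted (100%) response.
# Recovery need not be 100%, but it should be consistent across levels.
levels = {  # illustrative detector responses (peak areas)
    "low":  {"extracted": 950,   "unextracted": 1180},
    "mid":  {"extracted": 40500, "unextracted": 50100},
    "high": {"extracted": 81200, "unextracted": 99800},
}

for level, r in levels.items():
    rec = 100 * r["extracted"] / r["unextracted"]
    print(f"{level:>4} QC recovery: {rec:.1f}%")
```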

Stability

Several types of stability should be evaluated during the validation. Stability procedures should evaluate the stability of the analytes during sample collection and handling, after long-term (frozen at the intended storage temperature) and short-term (bench top, room temperature) storage and after going through freeze and thaw cycles and the analytical process. All stability determinations should use a set of samples prepared from a freshly made stock solution of the analyte in the appropriate analyte-free, interference-free biological matrix. Suggested experiments to determine stability are provided below.

a) Freeze and Thaw Stability

Analyte stability should be determined after three freeze and thaw cycles. At least three aliquots at each of the low and high concentrations should be stored at the intended storage temperature for 24 hours and thawed unassisted at room temperature. When completely thawed, the samples should be refrozen for 12 to 24 hours under the same conditions. The freeze-thaw cycle should be repeated two more times, and the samples should be analyzed after the third cycle. If an analyte is unstable at the intended storage temperature, the stability samples should be frozen at -70°C during the three freeze and thaw cycles.

b) Long-Term Stability

The storage time in a long-term stability evaluation should exceed the time between the date of first sample collection and the date of last sample analysis. Long-term stability should be determined by storing at least three aliquots of each of the low and high concentrations under the same conditions as the study samples. In consideration of this, there may be a need to include both -70°C and -20°C evaluations. The volume of samples should be sufficient for analysis on three separate occasions. The concentrations of all the stability samples should be compared to the mean of the back-calculated values for the standards at the appropriate concentrations from the first day of long-term stability testing.
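
A minimal sketch of this comparison; the day-0 means, stored-aliquot results and the ±15% acceptance window are illustrative assumptions rather than limits quoted from the Guidance:

```python
# Minimal sketch: stored QC aliquots compared with the mean back-calculated
# values from the first day of long-term stability testing.
day0_mean = {"low": 3.02, "high": 805.0}                       # ng/mL
stored    = {"low": [2.8, 2.9, 2.7], "high": [768, 790, 755]}  # three aliquots

for level, obs in stored.items():
    change = 100 * (sum(obs) / len(obs) - day0_mean[level]) / day0_mean[level]
    verdict = "STABLE" if abs(change) <= 15 else "UNSTABLE"
    print(f"{level:>4} QC after storage: {change:+.1f}% vs day 0 -> {verdict}")
```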

c) Stock Solution Stability

The stability of stock solutions of drug and the internal standard should be evaluated at room temperature for at least six hours. If the stock solutions are refrigerated or frozen for the relevant period, the stability should be documented. After completion of the desired storage time, the stability should be tested by comparing the instrument response with that of freshly prepared solutions.

d) Autosampler Stability

The stability of processed samples, including the resident time in the autosampler, should be determined. This stability is determined for 24 to 96 hours to cover the anticipated run time for the analytical batch. The extracted (ready-to-inject) quality control samples are kept at the autosampler temperature for the established time and analyzed with fresh standards.

Reinjection Reproducibility

Reinjection reproducibility is performed to establish that reinjection of samples kept in the autosampler at a controlled temperature has no effect on the reproducibility of the results. It is carried out by reinjecting a complete accuracy and precision batch, or a ruggedness batch, after storage in the autosampler for a minimum of approximately 24 hours.

Dilution Integrity

Dilution integrity is performed on at least one day of validation. One or more additional quality control samples at concentrations several times higher than the upper limit of the calibration curve should be prepared, covering the maximum expected dilution. These quality control samples are diluted with blank matrix to bring the concentration within the calibration range and then analyzed. The acceptance criteria for the diluted quality controls are the same as provided in the "Accuracy and Precision" section. If a dilution higher than the one covered during validation is needed during sample analysis, the further dilution can be validated during sample analysis by analyzing the required diluted quality control samples.
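
A minimal sketch of this check, assuming an illustrative over-range QC, a tenfold dilution with blank matrix and the same ±15% window used for the other QCs:

```python
# Minimal sketch of dilution integrity: dilute an over-range QC into the
# calibration range, analyze, then correct by the dilution factor.
nominal = 5000.0          # ng/mL, several-fold above an assumed ULOQ of 1000
dilution_factor = 10      # diluted with blank matrix to 500 ng/mL (in range)
measured_diluted = 489.0  # illustrative back-calculated result

back_calculated = measured_diluted * dilution_factor
bias = 100 * (back_calculated - nominal) / nominal
print(f"back-calculated: {back_calculated:.0f} ng/mL, bias {bias:+.1f}% "
      f"-> {'PASS' if abs(bias) <= 15 else 'FAIL'}")
```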

Ruggedness

Robustness is a measure of the method's capability to remain unaffected by small but deliberate variations in method parameters. Testing of various conditions (e.g., age of columns, column type, column temperature, pH of the buffer in the mobile phase and reagents) is normally performed.

Specific Recommendations for Method Validation:-

The matrix-based standard curve should consist of a minimum of six standard points, excluding blanks, using single or replicate samples. The standard curve should cover the entire range of expected concentrations.

Standard curve fitting is determined by applying the simplest model that adequately describes the concentration-response relationship using appropriate weighting and statistical tests for goodness of fit.
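
As an illustration of curve fitting with weighting, the sketch below performs a weighted linear least-squares fit with 1/x^2 weights, a common choice for bioanalytical calibration curves spanning a wide range; the concentrations, responses and choice of weighting are illustrative assumptions, not prescriptions from the Guidance:

```python
# Minimal sketch: weighted linear fit of response vs. concentration.
def weighted_linear_fit(x, y, w):
    """Minimize sum(w_i * (y_i - (a + b*x_i))**2); return (slope, intercept)."""
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (S * Sxy - Sx * Sy) / (S * Sxx - Sx * Sx)
    return slope, (Sy - slope * Sx) / S

conc     = [1, 5, 25, 100, 400, 1000]             # ng/mL (illustrative)
response = [0.021, 0.103, 0.49, 2.02, 7.9, 20.1]  # peak-area ratio vs. IS
weights  = [1 / (c * c) for c in conc]            # 1/x^2 weighting

m, b = weighted_linear_fit(conc, response, weights)
print(f"response = {m:.5f} * conc + {b:.5f}")
for c, r in zip(conc, response):  # back-calculate to check acceptance
    print(f"{c:5} ng/mL -> back-calculated {(r - b) / m:8.2f}")
```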

LLOQ is the lowest concentration of the standard curve that can be measured with acceptable accuracy and precision. The LLOQ should be established using at least five samples independent of standards and determining the coefficient of variation and/or appropriate confidence interval. The LLOQ should serve as the lowest concentration on the standard curve and should not be confused with the limit of detection and/or the low QC sample. The highest standard will define the upper limit of quantification (ULOQ) of an analytical method.

For validation of the bioanalytical method, accuracy and precision should be determined using a minimum of five determinations per concentration level (excluding blank samples).

The mean value should be within ±15% of the theoretical value, except at the LLOQ, where it should not deviate by more than ±20%. The precision around the mean value, expressed as the coefficient of variation (CV), should not exceed 15%, except for the LLOQ, where it should not exceed 20%. Other methods of assessing accuracy and precision that meet these limits may be equally acceptable.

The accuracy and precision with which known concentrations of analyte in biological matrix can be determined should be demonstrated. This can be accomplished by analysis of replicate sets of analyte samples of known concentrations (QC samples) from an equivalent biological matrix. At a minimum, three concentrations representing the entire range of the standard curve should be studied: one within 3x

the lower limit of quantification (LLOQ) (low QC sample), one near the center (middle QC) and one near the upper boundary of the standard curve (high QC).

Reported method validation data and the determination of accuracy and precision should include all outliers; however, calculations of accuracy and precision excluding values that are statistically determined as outliers can also be reported.

The stability of the analyte in biological matrix at intended storage temperatures should be established. The influence of freeze-thaw cycles (a minimum of three cycles at two concentrations in triplicate) should be studied.

The stability of the analyte in matrix at ambient temperature should be evaluated over a time period equal to the typical sample preparation, sample handling and analytical run times.

Reinjection reproducibility should be evaluated to determine if an analytical run could be reanalyzed in the case of instrument failure.

The specificity of the assay methodology should be established using a minimum of six independent sources of the same matrix. For hyphenated mass spectrometry-based methods, however, testing six independent matrices for interference may not be important.

In the case of LC-MS and LC-MS-MS-based procedures, matrix effects should be investigated to ensure that precision, selectivity and sensitivity will not be compromised.

Method selectivity should be evaluated during method development and throughout method validation and can continue throughout application of the method to actual study samples.

Acceptance/rejection criteria for spiked, matrix-based calibration standards and validation QC samples should be based on the nominal (theoretical) concentration of analytes. Specific criteria can be set up in advance and achieved for accuracy and precision over the range.

HPLC and the mass spectrometer were the instruments used during bioanalysis.

High Performance Liquid Chromatography (HPLC)

Introduction:-

HPLC is one mode of chromatography, the most widely used analytical technique. Chromatography is the collective term for a family of laboratory techniques for the separation of mixtures. It involves passing a mixture dissolved in a "mobile phase" through a stationary phase, which separates the analyte to be measured from the other molecules in the mixture and allows it to be isolated.

History of chromatography

The word chromatography comes from the Greek words for color, "chroma", and to write, "graphein"; chromatography thus means 'to write with color'. Chromatography was first developed and defined by the Russian botanist Mikhail Tswett (1) (1872-1919) in 1903. He produced a colorful separation of plant pigments using a column of calcium carbonate (chalk). Tswett stated:

"Chromatography is a method in which the components of a mixture are separated on an adsorbent column in a flowing system."

Although color has little to do with modern chromatography, the name has persisted and, despite its irrelevance, is still used for all separation techniques that employ the essential requisites of a chromatographic separation, viz. a mobile phase and a stationary phase. The technique as described by Tswett was largely ignored for a long time, and it was not until the late 1930s and early 1940s that Martin and Synge (2) introduced liquid-liquid chromatography by supporting the stationary phase, in this case water, on silica in a

packed bed and used it to separate acetylated amino acids. In the same paper, they suggested that the liquid mobile phase could be replaced by a suitable gas, as the transfer of sample between the two phases would be faster and would thus provide more efficient separations. In this manner, the concept of gas chromatography was created, but again little notice was taken of the suggestion, and it was left to Martin himself, together with A. T. James, to bring the concept to practical reality nearly a decade later. In the same publication in 1941, the essential requirements for HPLC (High Performance Liquid Chromatography) were also unambiguously stated.

Definition:-

Chromatography is a separation process that is achieved by distributing the components of a mixture between two phases, a stationary phase and a mobile phase. Those components held preferentially in the stationary phase are retained longer in the system than those that are distributed selectively in the mobile phase. As a consequence, solutes are eluted from the system as local concentrations in the mobile phase in the order of their increasing distribution coefficients with respect to the stationary phase.

Fundamental Theory of Chromatography

High-performance liquid chromatography (HPLC) is derived from liquid chromatography and is used to separate compounds that are dissolved in solution. LC utilizes a liquid mobile phase to separate the components of a mixture. These components (or analytes) are first dissolved in a solvent and then forced to flow through a chromatographic column under high pressure. In the column, the mixture is resolved into its components. The extent of resolution is important and depends upon the extent of interaction between the solute components and the stationary phase. Liquid chromatography is a separation technique in which the mobile phase is a liquid; it is carried out either in a column or on a plane. Liquid chromatography that utilizes very small packing particles and relatively high pressure is referred to as high-performance liquid chromatography (HPLC).

In HPLC, instead of the solvent being allowed to drip through a column under gravity, it is forced through under pressures of up to 400 atmospheres, which makes the technique much faster. The sample is forced through a column that is packed with irregularly or spherically shaped particles or a porous monolithic layer (the stationary phase) by a liquid (the mobile phase) at high pressure. This allows the use of a much smaller particle size for the column packing material, which gives a much greater surface area for interactions between the stationary phase and the molecules flowing past it and thus allows a much better separation of the components of the mixture.

The other major improvement over column chromatography concerns the detection methods which can be used. These methods are highly automated and extremely sensitive.

Principle of chromatography

The common feature is that two mutually immiscible phases are brought into contact with each other. One of these phases is stationary, while the other is mobile; the mobile phase either moves over the surface or percolates through the interstices of the stationary phase. The sample mixture, introduced into the mobile phase, undergoes repeated interactions (partitions) between the stationary and mobile phases while being carried through the system by the mobile phase. Different components of the sample mixture interact with the two phases differentially on the basis of small differences in their physico-chemical properties. Since these different rates of interaction govern the migration of the sample components through the system, each component migrates at a different rate. A compound that interacts more with the mobile phase and least with the stationary phase migrates quickly. A component showing the least interaction with the mobile phase while interacting strongly with the stationary phase migrates slowly (is retarded). This differential movement of components is responsible for their ultimate separation from each other.

When a solute is allowed to equilibrate between two equal volumes of two immiscible liquids, the ratio of the concentrations of the solute in the two phases at equilibrium at a given temperature is called the partition coefficient, also known as the distribution coefficient. The concentration of the compound in each of the phases is described by the partition coefficient, K, which is expressed as follows:

K = Cs/Cm

where Cs and Cm are the concentrations of the compound in the stationary and the mobile phase, respectively. If the partition coefficient of a substance between cellulose (stationary phase) and carbon tetrachloride (mobile phase) is 0.2, the concentration of the substance in carbon tetrachloride is five times that in cellulose. The concept of the partition coefficient is the basic principle of all chromatographic methods.
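
A minimal sketch reproducing the cellulose/carbon tetrachloride example from the text:

```python
# Minimal sketch of the partition coefficient K = Cs/Cm. With K = 0.2, the
# mobile-phase concentration is five times the stationary-phase concentration.
def partition_coefficient(c_stationary, c_mobile):
    return c_stationary / c_mobile

K = partition_coefficient(c_stationary=0.2, c_mobile=1.0)  # illustrative units
print(f"K = {K:.1f}; mobile-to-stationary concentration ratio = {1 / K:.0f}")
```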

The resolving power of a chromatographic column increases with column length and the number of theoretical plates per unit length, although there are limits to the length of a column owing to the problem of peak broadening. As the number of theoretical plates is related to the surface area of the stationary phase, it follows that the smaller the particle size of the stationary phase, the better the resolution. Unfortunately, the smaller the particle size, the greater is the resistance to flow of mobile phase. This creates a back pressure in the column that is sufficient to damage the matrix structure of the stationary phase, thereby actually reducing eluent flow and impairing resolution.

Types of chromatography

HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases:-

(i) Normal phase HPLC-

Techniques in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase and silica as the stationary phase) are called normal-phase chromatography (NPLC). The column is filled with tiny silica particles, and the solvent is non-polar - hexane, for example. A typical column has an internal diameter of 4.6 mm (and may be less than that) and a length of 150 to 250 mm. Polar compounds in the mixture being passed through the column will stick longer to the polar silica than non-polar compounds will. The non-polar ones will therefore pass more quickly through the column.

(ii) Reversed phase HPLC –

In this case, the column size is the same, but the silica is modified to make it non-polar by attaching long hydrocarbon chains to its surface - typically with either 8 or 18 carbon atoms in them. A polar solvent is used - for example, a mixture of water and an alcohol such as methanol. In this case, there will be a strong attraction between the polar solvent and polar molecules in the mixture being passed through the column. There won't be as much attraction between the hydrocarbon chains attached to the silica (the stationary phase) and the polar molecules in the solution. Polar molecules in the mixture will therefore spend most of their time moving with the solvent. Non-polar compounds in the mixture will tend to form attractions with the hydrocarbon groups because of van der Waals dispersion forces. They will also be less soluble in the solvent because of the need to break hydrogen bonds as they squeeze in between the water or methanol molecules, for example. They therefore spend less time in solution in the solvent, and this will slow them down on their way through the column. That means that it is now the polar molecules that will travel through the column more quickly. Reversed-phase HPLC is the most commonly used form of HPLC.

Instrumentation :-

High-performance liquid chromatography (HPLC) is a form of liquid chromatography used to separate compounds that are dissolved in solution. HPLC instruments consist of:

1.) Mobile phase reservoir, filtering - to store the mobile phase.
2.) Pumps - to push the mobile phase through the column.
3.) Columns - in which the separation takes place.
4.) Detectors - used to detect the concentration of the sample components as they come out of the column.
5.) Injectors - a device to inject the sample into the mobile phase.
6.) Data systems
7.) Back pressure regulator
8.) Solvent reservoir and the solvents

Fig: Schematic of HPLC instrumentation

1.) Mobile phase reservoir, filtering

The solvent reservoir should meet the following criteria:

(i) It must contain enough volume for repetitive analyses.
(ii) It must have a provision for degassing the solvent.
(iii) It must be inert to the solvent.

Solvent degassing within the reservoir is usually performed by heating, by application of vacuum or by ultrasonic treatment. Occasionally, sparging with helium may be used for degassing. An HPLC system begins with the solvent reservoir, which contains the solvent used to carry the sample through the system. The solvent should be filtered with an inlet solvent filter to remove any particles that could potentially damage the system's sensitive components. The most common type of solvent reservoir is a glass bottle. Most manufacturers supply these bottles with special caps, Teflon tubing and filters to connect to the pump inlet and to the purge gas (helium) used to remove dissolved air. Helium purging and storage of the solvent under helium have been found not to be sufficient for degassing aqueous solvents; it is useful to apply a vacuum for 5-10 min and then keep the solvent under a helium atmosphere.

2.) Pumps

High-pressure pumps are needed to force solvents through packed stationary-phase beds; smaller bed particles require higher pressures. There are many advantages to using smaller particles, the most important being higher resolution, faster analyses and increased sample load capacity. However, many separation problems can be resolved with

larger particle packings that require less pressure. Flow rate stability is another important feature that distinguishes pumps. Very stable flow rates are usually not essential for analytical chromatography; however, if the system is to be used in size-exclusion mode, a pump that provides an extremely stable flow rate is required.

Modern pumps have the following parameters:

Pressure generation up to 5000 psi
Pulse-free output
Flow rates of 0.1 to 10 ml/min
Flow control and flow reproducibility

3.) Columns:

Basically, two types of columns are used in HPLC:

1) Analytical column
2) Guard column

The separation takes place in the analytical column. However, a sacrificial guard column is often included just prior to the analytical column to chemically remove components of the sample that would otherwise foul the main column. Typical LC columns are 10, 15 and 25 cm in length and are fitted with extremely small particles (3, 5 or 10 μm in diameter). The internal diameter of the columns is usually 4 or 4.6 mm; this is considered the best compromise among sample capacity, mobile phase consumption, speed and resolution.

4.) Detectors:

The most commonly used detectors in LC are:

Refractive index
Ultraviolet
Fluorescence
Conductivity
Mass-spectrometric (LC/MS)

These detectors pass a beam of light through the flowing column effluent as it passes through a low-volume (~10 μl) flow cell. The variations in light intensity caused by UV absorption, fluorescence emission or change in refractive index (depending on the type of detector used) from the sample components passing through the cell are monitored as changes in the output voltage. These voltage changes are recorded on a strip chart recorder and are frequently fed into an integrator or computer to provide retention time and peak area data.

The RI detector is universal but also the least sensitive. The MS detector is the most powerful but is still the most complicated and most expensive.

Ultraviolet detectors:

There are two types of UV-visible detector:

1. Fixed-wavelength
2. Variable-wavelength

Variable-wavelength detectors:

Detectors that allow the selection of the operating wavelength are called variable-wavelength detectors; they are particularly useful in cases where:

the best sensitivity for an absorptive component is to be obtained by selecting an appropriate wavelength;

individual sample components have high absorptivities at different wavelengths, so that operation at a single wavelength would reduce the system's sensitivity.

Fluorescence detectors

Fluorescence detectors are probably the most sensitive of the existing modern HPLC detectors; it is possible to detect even the presence of a single analyte molecule in the flow cell. Typically, fluorescence sensitivity is 10-1000 times higher than that of a UV detector for strongly UV-absorbing materials. Fluorescence detectors are also very specific and selective compared with the other optical detectors.

Compounds with specific functional groups are excited by shorter-wavelength energy and emit radiation at a longer wavelength; this emission is called fluorescence. Usually, the emission is measured at right angles to the excitation.

5.) Injectors

Sample introduction can be accomplished in various ways. The simplest method is to use a sample injector (injection valve). This valve allows the reproducible introduction of sample into the flow path. In more sophisticated LC systems, automatic sampling devices are incorporated, where sample introduction is done with the help of autosamplers and microprocessors.

In liquid chromatography, liquid samples may be injected directly and solid samples need only be dissolved in an appropriate solvent.

6.) Data systems

Since the detector signal is electronic, the use of modern data acquisition techniques can aid in signal analysis. The main goal in using electronic data systems is to increase analysis accuracy and precision while reducing operator attention. There are several types of data systems, each differing in terms of available features.

7.) Backpressure regulator

As a final system enhancement, a backpressure regulator is often installed immediately after the detector. This device prevents solvent bubble formation until the solvent is completely through the detector. This is important because bubbles in a flow cell can interfere with the detection of sample components.

MASS SPECTROMETER

INTRODUCTION :-

Mass spectrometry (MS) has emerged as an indispensable analytical technique. Today, liquid chromatography (LC) coupled to MS is frequently used in routine qualitative and quantitative analysis in laboratories. Historically, the issues of concern in MS research have mainly been the development and refinement of the hardware employed, whereas now much emphasis is put on data evaluation and interpretation. The possibility of discovering new biomarkers using MS is a rapidly growing field of research. Biomarkers are compounds that can potentially be used for early diagnosis or disease/treatment surveillance; the measurement should then convey information on the biological condition being tested. Sceptics, however, stress the difficulties of validating the analytical procedures involved. Even though MS has become a mature technique, there are still limitations to the instrumentation available today. Especially in quantitative analysis, many researchers report unwanted increases or decreases of the MS signal due to other constituents in the sample, which can potentially impair the analysis. Therefore, the quest for more robust technical solutions and analytical methods will continue. The potential of MS, and also of LC-MS, is recognized in the huge amount of data produced. However, in order to fully appreciate the data gathered, proper techniques for data treatment and evaluation have to be incorporated. Coupling of LC to mass spectrometry provides additional sensitivity and,

most notably, high selectivity for the analysis of drug conjugates (Baillie 1992). Early work in the mass spectrometric analysis of glucuronides was done using ionisation methods such as electron impact and chemical ionisation (Perchalski et al. 1982, Bruins 1981), thermospray (TSP) (Liberato et al. 1983, Rudewicz and Straub 1986) and fast atom bombardment (FAB) (Rudewicz and Straub 1986, Fenselau et al. 1984). In the late 1980s and early 1990s, the majority of glucuronide studies employed TSP-LC-MS and FAB-MS, as reviewed by Baillie in 1992. The sensitivity of these techniques is highly dependent upon analyte type, however, and TSP mass spectra are often affected by temperatures at the interface. The introduction of atmospheric pressure ionisation (API) techniques, in which the eluent from LC is nebulised into the ion source at atmospheric pressure, has made coupling of LC to MS easy, and nowadays practically all LC-MS methods, including those in glucuronide analysis, rely on these ionisation techniques (Niessen 2000). Interfaces for atmospheric pressure chemical ionisation and electrospray ionisation are compatible with reversed-phase solvent systems, suitable for a wide range of structural types, easy to operate, and available from all major mass spectrometer manufacturers. More recently, a new interface based on atmospheric pressure photoionisation (APPI) has been introduced to the market (Robb et al. 2000). API techniques are soft ionisation methods that yield abundant even-electron parent ions ([M+H]+ or [M-H]-), which indicate the molecular weight of the unknown compound. The spectra obtained by soft ionisation methods often suffer from the absence of significant fragment ions, but this problem can be solved by the use of tandem mass spectrometry. MS/MS is an irreplaceable tool in the structure elucidation of drug metabolites and is also utilised in the "metabolic mapping" of drug substances, as first outlined in 1982 (Perchalski et al. 1982) and further developed shortly thereafter (Rudewicz and Straub 1986). In the "metabolic mapping" procedure, possible metabolites are searched for by precursor ion scanning or neutral loss scanning under collision induced dissociation (CID) conditions. The parent ion found with this procedure is selected for CID, and the molecular structure of the compound is further confirmed by a daughter ion spectrum.

Tandem Mass Spectrometry (MS–MS)

The great strength of mass spectrometry as a technique is that it can provide both the molecular weight of an analyte (the single most discriminating piece of information in structure elucidation) and information concerning the structure of the molecule involved. The ionization techniques most widely used for liquid chromatography-mass spectrometry, however, are termed 'soft ionization' in that they produce primarily molecular species with little fragmentation. It is unlikely that the molecular weight alone will allow a structural assignment to be made, and it is therefore desirable to be able to generate structural information from such techniques. There are two ways in which this may be done, one of which, the so-called 'cone-voltage' or 'in-source' fragmentation, is associated specifically with the ionization techniques of electrospray ionization and atmospheric pressure chemical ionization. There are a large number of different tandem mass spectrometry experiments that can be carried out, but the most widely used techniques are:

1. Product-ion scan

2. Precursor-ion scan
3. Constant-neutral-loss scan
4. Selected-decomposition monitoring

1. Product-ion scan

In this scan, the first stage of mass spectrometry (MS1) is used to isolate an ion of interest; in liquid chromatography-mass spectrometry, this is often the molecular species from the analyte. Fragmentation of the ion is then effected; the means by which this is achieved depends on the type of instrument being used, but it is often by collision with gas molecules in a collision cell, i.e. MS2 in the triple quadrupole.

2. Precursor-ion scan

In this scan, the second stage of mass spectrometry (MS3) is set to transmit a single m/z ratio, namely that of the product (fragment) ion of interest, while the first stage (MS1) is set to scan through the mass range of interest, with the fragmentation of ions passing through MS1 again being carried out in MS2, the collision cell. A signal is seen at the detector only when ions are being transmitted by both MS1 and MS3, i.e. when an ion being transmitted by MS1 fragments to give the desired ion. The ion-trap and Q-ToF instruments are, because of the way they operate, unable to carry out precursor-ion scans.

3. Constant-neutral-loss scan

In addition to simple fragmentation reactions, it has been pointed out that molecules containing certain structural features also undergo rearrangement reactions in the source of the mass spectrometer. Probably the best known of these is the McLafferty rearrangement, in which a hydrogen atom migrates to an unsaturated group (often a carbonyl group) with the elimination of a neutral molecule. These rearrangement reactions may also occur in MS-MS instruments, and the constant-neutral-loss scan enables the analyst to observe all of the ions in the mass spectrum that fragment with a particular mass loss and therefore contain a specific structural feature. Again, ion-traps and Q-ToF instruments are not capable of carrying out this type of scan.

4. Selected-decomposition monitoring

The tandem mass spectrometry equivalent of selected-ion monitoring is known as selected-decomposition monitoring (SDM) or selected-reaction monitoring (SRM), in which the fragmentation of a selected precursor ion to a selected product ion is monitored. This is carried out by setting each of the stages of mass spectrometry to transmit a single ion, i.e. the precursor ion by MS1 and the product ion by MS3. If several different reactions are monitored, the term multiple-reaction monitoring (MRM) is used.
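
As a small illustration of how SRM/MRM acquisition can be specified, the sketch below represents each monitored reaction as a precursor-to-product m/z pair; the compound names and m/z values are illustrative placeholders, not values from the text:

```python
# Minimal sketch: each MRM transition fixes MS1 on the precursor m/z and MS3
# on the product m/z (using the document's MS1/MS2/MS3 naming for the triple
# quadrupole). All names and m/z values below are illustrative placeholders.
mrm_transitions = {
    "analyte":           {"precursor_mz": 310.2, "product_mz": 148.1},
    "internal_standard": {"precursor_mz": 316.2, "product_mz": 154.1},
}

for name, t in mrm_transitions.items():
    print(f"{name}: MS1 fixed at m/z {t['precursor_mz']}, "
          f"MS3 fixed at m/z {t['product_mz']}")
```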

Instrumentation

1. The Triple Quadrupole:

The arrangement of three quadrupoles was first developed by Jim Morrison of La Trobe University, Australia, for the purpose of studying the photodissociation of gas-

phase ions. The first triple-quadrupole mass spectrometer was developed at Michigan State University by Dr. Christie Enke and graduate student Richard Yost in the late 1970s. The triple quadrupole is probably the most widely used tandem mass spectrometry instrument. The hardware, as the name suggests, consists of three sets of quadrupole rods in series. The second set of rods is not used as a mass separation device but as a collision cell, where fragmentation of ions transmitted by the first set of quadrupole rods is carried out, and as a device for focusing any product ions into the third set of quadrupole rods. Both sets of rods may be controlled to allow the transmission of ions of a single m/z ratio or a range of m/z values to give the desired analytical information. A schematic representation of the triple-quadrupole mass spectrometer is given in figure 1.

Figure 1 Schematic of a triple quadrupole mass spectrometer

2. The Hybrid Mass Spectrometer

When the first quadrupole of a triple quadrupole is replaced by a double-focusing mass spectrometer, the instrument is termed a hybrid (i.e. a hybrid of magnetic-sector and quadrupole technologies). The advantage of this configuration is that the MS1 instrument can be used under high-resolution conditions to select the ion of interest. A schematic representation of a hybrid MS-MS instrument is given in figure 2.

Figure 2 Schematic of a hybrid MS–MS instrument

3. The Quadrupole–Time-of-Flight Instrument

In this instrument, the final stage of the triple quadrupole is replaced by an orthogonal time-of-flight (ToF) mass analyser. The configuration is typical of the latest generation of ToF instruments, in which a number of reflectrons, in this case two, are used to increase the flight path of the ions and thus increase the resolution that may be achieved. To reiterate, in contrast to other mass analysers, which are scanned sequentially

through the m/z range of interest and provide MS-MS spectra of user-selected masses, the ToF analyser detects all of the ions that enter it at a specific time. It is therefore possible, particularly in view of the high-scan-speed capability of this instrument, to provide, continuously, a full MS-MS product-ion spectrum of each ion produced in the source of the mass spectrometer. The disadvantage of this mode of operation is that it renders the Q-ToF system unable to carry out precursor-ion and constant-neutral-loss scans. A schematic representation of the quadrupole-time-of-flight mass spectrometer is given in figure 3.

Figure 3 Schematic of a quadrupole–time-of-flight mass spectrometer

Tandem Mass Spectrometry on the Ion-Trap:

This type of system differs from those described previously in that the tandem mass spectrometry capability is associated only with the way in which the ion-trap is operated, i.e. it is software controlled and does not require the addition of a collision cell and a further analyser. This is because ion selection, decomposition and the subsequent analysis of the product ions are all carried out in the same part of the instrument, with these processes being separated solely in time, rather than in time and space as is the case for the instruments described previously. As with the Q-ToF instrument, only two types of tandem mass spectrometry experiment are available with the ion-trap, i.e. the product-ion scan and selected-decomposition monitoring.

Ionization and Detection in the Tandem Mass Spectrometer:-

Electrospray Ionization (ESI)

Electrospray ionization (ESI) is a technique used in mass spectrometry to produce ions. The development of electrospray ionization for the analysis of biological macromolecules was recognized with the award of the Nobel Prize in Chemistry to John Bennett Fenn in 2002 (35-37). A schematic representation of an electrospray LC-MS interface is given in figure 4.

Figure 4 Schematic of an electrospray LC–MS interface

The liquid flow from the high-performance liquid chromatography pump enters through a metal capillary maintained at high voltage (typically 3-4 kV). This high voltage disperses the liquid stream, forming a mist of highly charged droplets that undergo desolvation during their passage across the source of the mass spectrometer. As the size of a droplet reduces, a point is reached (within 100 μs) at which the repulsive forces between charges on the surface of the droplet are sufficient to overcome the cohesive

forces of surface tension. A 'Coulombic explosion' then occurs, producing a number of smaller droplets with a radius approximately 10% of that of the parent droplet. A series of such explosions then takes place until a point is reached at which ions of the appropriate analytes dissolved in these droplets are produced and are transferred through a series of focusing devices (lenses) into the mass spectrometer. There are alternative explanations for the actual mechanism by which these ions are produced, e.g. the ion-evaporation and charge-residue models, and these have been debated for some time. According to the ion-evaporation model, the droplets become smaller until a point is reached at which the surface charge is sufficiently high for direct ion evaporation into the gas phase to occur. In the case of the charge-residue model, repeated Coulombic explosions take place until droplets are formed that contain a single ion; evaporation of the solvent continues until an ion is formed in the vapour phase. ESI is now probably the most widely used liquid chromatography-mass spectrometry interface, as it is applicable to a wide range of polar and thermally labile analytes of both low and high molecular weight and is compatible with a wide range of high-performance liquid chromatography conditions. There are various advantages and disadvantages associated with electrospray ionization.

Advantages

Ionization occurs directly from solution and consequently allows ionic and thermally labile compounds to be studied.

Mobile phase flow rates from nL/min to in excess of 1 mL/min can be used with appropriate hardware, thus allowing conventional and microbore columns to be employed.

For high-molecular-weight materials, an electrospray spectrum provides a number of independent molecular weight determinations from a single spectrum and thus increased precision.

Disadvantages

Electrospray is not applicable to non-polar or low-polarity compounds.

Suppression effects may be observed, and the direct analysis of mixtures is not always possible. This has potential implications for co-eluting analytes in liquid chromatography-mass spectrometry.

Electrospray is a soft-ionization method producing intact molecular species, and structural information is not usually available. Electrospray sources are capable of producing structural information from cone-voltage fragmentation, but these spectra are not always easily interpretable. Experimentally, the best solution is to use a mass spectrometer capable of tandem mass spectrometry operation, but this has not inconsequential financial implications.

Atmospheric Pressure Chemical Ionization (APCI)

Atmospheric pressure chemical ionization (APCI) is an ionization method used in mass spectrometry. It is a form of chemical ionization which takes place at atmospheric pressure. Atmospheric pressure chemical ionization allows the high flow rates typical of standard-bore high-performance liquid chromatography to be used directly, often without diverting the larger fraction of the volume to waste. Typically, the mobile phase

containing the eluting analyte is heated to relatively high temperatures (above 400 degrees Celsius), sprayed with high flow rates of nitrogen, and the entire aerosol cloud is subjected to a corona discharge that creates ions. APCI is basically a gas-phase ionization process, unlike ESI, which is a liquid-phase ionization process. Non-polar solvents can also be used to prepare the solution, instead of the polar solvents needed to support ions in solution for ESI, because the solvent is converted to the gaseous state before it reaches the corona discharge pin, which supports the formation of ions. Typically, atmospheric pressure chemical ionization is a less "soft" ionization technique than ESI, i.e. it generates more fragment ions relative to the parent ion. There are various advantages and disadvantages associated with atmospheric pressure chemical ionization. A schematic representation of an atmospheric pressure chemical ionization interface is given in figure 5.

Figure 5 Atmospheric Pressure Chemical Ionization interface

Advantages

Atmospheric pressure chemical ionization produces ions from solution and while the analyte experiences more heat than with electrospray, compounds with a degree of thermal instability may be studied without their decomposition.

Atmospheric pressure chemical ionization is best applied to compounds with low to moderately high polarities.

Atmospheric pressure chemical ionization is a soft ionization technique which usually enables the molecular weight of the analyte under study to be determined.

Atmospheric pressure chemical ionization is able to deal with flow rates of up to 2 ml min-1.

Atmospheric pressure chemical ionization is more tolerant to the presence of buffers in the mobile phase stream than is ESI.

Atmospheric pressure chemical ionization is more tolerant to changes in experimental conditions than is ESI and a range of mobile phases, including gradient elution, may therefore be accommodated using a single set of experimental conditions.

Disadvantages

Atmospheric pressure chemical ionization spectra can contain ions from adducts of the analyte with the high-performance liquid chromatography mobile phase or with organic modifiers, such as ammonium acetate, that may be present. Structural information is not usually available unless cone-voltage fragmentation or MS-MS is used.

Atmospheric pressure chemical ionization is not able to function effectively at very low flow rates.

Atmospheric pressure chemical ionization is not suitable for analytes that are charged in solution.

Ion Detection:-

Channel Electron Multiplier

The channel electron multiplier is a single-particle detector which, in its basic form, consists of a hollow tube (channel) of either glass or ceramic material with a semiconducting inner surface. The detector responds to one or more primary electron impact events at its entrance (input) by producing, in a cascade multiplication process, a charge pulse of typically 10⁴-10⁸ electrons at its exit (output). Because particles other than electrons can impact the entrance of the channel electron multiplier to produce a secondary electron, which is then subsequently multiplied in a cascade, the channel electron multiplier can be used to detect charged particles other than electrons (such as ions or positrons), neutral particles with internal energy (such as metastable excited atoms) and photons as well. As a result, this relatively simple, reliable and easily applied device is employed in a wide variety of charged-particle and photon spectrometers and related analytical instruments, such as residual gas analyzers, mass spectrometers and the spectrometers used in secondary ion mass spectrometry (SIMS), electron spectroscopy for chemical analysis (ESCA) and Auger electron spectroscopy.
