
Want to Get Involved In Research?

The BioInnovation Group is an undergraduate-run research organization aimed at increasing undergraduate access to research opportunities. We have many programs ranging from research project teams to skills training (BIG-RT) and Journal Club.

If you are an undergraduate interested in gaining research experience and skills training, check out our website (https://bigucd.com/) to see what programs and opportunities we have to offer. In order to stay up to date on our events and offerings, you can sign up for our newsletter. We look forward to having you join us!

Newest Posts

Vocal Communication in the Domestic Dog

By Sarah Su, Animal Science, ’24

 

Abstract

Companion animal species communicate through multiple channels, including tactile, visual, olfactory, and auditory signals. This paper focuses on vocal communication in canines, comparing the behaviors of wolves to those of dogs. As a result of domestication, most dog breeds show marked differences from the ancestral wolf in vocalization range, frequency, and function: wolves face natural selection pressure to communicate with other wolves, while humans have spent many generations artificially selecting for effective human-dog communication. This divergence underlies the noise-related problems of dogs living in modern human society, such as excessive barking and noise-triggered howling. Thus, understanding vocal communication in dogs (and its origins) is key to implementing solutions such as proper socialization, counterconditioning, and desensitization.

Introduction

Vocalization as a communication behavior is especially distinctive in the family Canidae, which includes dogs, wolves, jackals, foxes, and coyotes. Wild canids can produce a variety of sounds including yelps, woofs/coughs, yips, screams, barks, growls, coos, howls, mews, grunts, and clicks [1]. The vocalization range of the domestic dog is similar, including barks, howls, growls, whines, yelps, snores, groans, and grunts [2]. Of these, the quieter sounds are often used in casual contexts, while the louder vocalizations can be utilized for more aggressive communication [3]. Though vocalizing is a universally observed trait in the canid family, the use of these vocalizations differs across species: the domestic dog communicates by barking more often than howling, in contrast to its closest relative, the wolf [1]. This difference can be explained by the process of domestication – changing selective pressures, emphasizing interspecies communication, and altering the dog’s space and territory [3]. This paper will focus specifically on the differences between howling and barking vocalizations, and how they differ between the domestic dog and its ancestral wolf species. Additionally, this paper will touch on the impact of loud noises made by the modern dog when living in densely populated areas.

Barking vs. Howling

The sounds of barking and howling differ from each other in volume, length, and function, to the point where even non-canine species may distinguish between them. A “bark” is a series of “noisy loud bursts with a medium fundamental frequency of 150-2,000 Hz” [3]. Howls, on the other hand, are much more variable, defined as “high amplitude, long range extended calls (1–10 s long) often with undulating fundamental frequencies varying between 150 and 2,000 Hz” [1]. The duration of a bark is considerably shorter than that of a howl, but howls are meant to carry over longer distances. Another large difference between the vocalizations is that barking is usually directed at a specific target or aggressor, while howling is nondirectional [5]. Because of these differences, barking is louder at close range than howling. Wolf barks also differ from dog barks: the wolf barks significantly less frequently, and at a much lower pitch, than the dog [2]. Finally, the context of barking behavior is also different – wolves only bark when they feel threatened (defending territory or exhibiting dominance), whereas dogs bark reactively to many different triggers [1-6]. The differences between the way dogs vocalize compared to wolves can be partially explained by the process of domestication, which will be discussed later.

Barking and Howling in Wolves

In general, vocalization is biologically influenced by body weight and age – a large, mature wolf will have a longer howl with a lower pitch compared to a smaller juvenile [4]. The same is true for barking; although wolves bark infrequently, the tone and pitch vary based on the individual [4]. The wolf developed howling for long-distance communication, territorial signaling, and fostering relationships within the pack. The primary reason that a wolf howls is location-signaling, with the purpose of finding pack members or calling the whole pack to its location [3].

Howling is one of the most prominent and frequent behaviors performed by wolves. It is used by wolves specifically because it is the vocalization that travels the farthest with the least distortion, enabling an individual to quickly locate the source of the howl. Howls are unique to the individual, which allows wolves to locate specific members [3]. Wolves also howl to maintain their territorial boundaries. If a wolf discovers another pack trespassing, it will assess the situation and either silently avoid the other individual or begin howling to call members of its own pack [3]. Finally, howling behavior is linked to temporal changes – wolves are apt to howl both night and day, but they are especially vocal between July and October, when packs are raising pups [7]. These months are also the period of least aggression between different packs, so howling is used for locating individuals rather than calling the pack to a single location [7-8].

Barking and Howling in Dogs

Although domestic dogs have a similar range of vocalizations to the wolf, their dependency on humans due to domestication has resulted in neotenized communication behaviors. Neoteny is the retention of juvenile traits of the ancestral species in a mature individual of the domesticated species. For example, wolf pups are born with short faces that elongate after adolescence, but dog breeds such as bulldogs, pugs, and boxers are bred for short and broad skulls that persist into adulthood (scientifically, this is termed a “brachycephalic” head shape). This phenomenon extends to behavior: wolves typically perform play behaviors such as fetch only as pups, yet humans have selected for dogs that play fetch past maturity. Indeed, breeds such as golden retrievers and labradors have been specifically bred to perform this behavior repeatedly. Hypertrophied (exaggerated) juvenile traits also exist in communication: mature domestic dogs vocalize loudly and frequently to solicit care, most commonly by barking, similarly to how wolf pups call for attention from their mother [9].

Studies suggest that domestication favors dogs that bark frequently because this vocalization is loud, repetitive, directional, and meant for those in close proximity. For example, Faragó, Townsend, & Range (2013) demonstrated that the shift from a pack structure to a human-controlled hierarchy resulted in a decrease in howling over time [3]. Unlike wolves, which occupy territories extending hundreds of acres, dogs living in houses don’t need to howl in order to be heard by a human – barking is effective at close range. Howling is primarily used by wolves because it is nondirectional and doesn’t require a response; barking is highly directional, and thus a dog’s bark invites a response from humans. Dogs also use barking for attention because the short, loud sound is more recognizable to humans. Humans are generally able to distinguish a dog’s mood by the sound of its bark, telling apart aggressive, happy, fearful, and stressed barks in context [3]. Similar to wolves’ understanding of each other’s howls, humans listen to frequency, tonality, and rhythm to recognize a dog’s inner state [3]. As humans became the dog’s primary caregiver, dogs shifted to communicating via barking, while wolves continue to howl to communicate with other wolves. Another domestication-related reason for the prevalence of barking in dogs is the change in selective pressure. While the wolf is subject to the pressures of nature, dogs are subject to selection by humans. Barking is a hypertrophied trait – wolf pups and adult dogs bark – but excessive noise is detrimental to the survival of an adult wolf. Since humans protect and feed dogs, there is no pressure to stay silent in order to hide from predators or stalk prey [1]. Greater vocality also makes dogs more trainable: being loud and unable to survive in the wild creates a dependency on humans for food, which allows humans to leverage dog behavior.

Another factor in canine vocalization is the social environment. A 2012 study found that free-ranging or feral dogs living in packs bark only in context-specific situations (like wolves), while indoor dogs tend to bark at things indiscriminately [5]. In this case, the immediate environment is another factor in vocalization – as long as dogs are housed indoors, they will bark more often than they howl. Another study found that when placed in unfamiliar environments, domesticated dogs made frequent contact-seeking sounds towards the experimenter, whereas wolves went completely silent. This is further evidence of human selection for traits that would normally be selected against in the wild [1].

Dogs and Society

Barking allows humans to better understand dogs, and it may have been initially beneficial to the human caregiver that dogs bark louder and more continuously than wolves [9]. However, barking is also the noisiest vocalization, and dogs have been bred to bark excessively, with little stimulus needed for continuous noise. This has consequences in the modern day for dogs living alongside humans in cities.

Not all dogs are the same, and noisy barking tendencies may change based on the individual’s breed, size, or temperament, but the instinctual reasons for vocalization remain the same. Some breeds that are relatively quiet include Basenjis, Shar-Peis, and Chow-Chows [2]. In contrast, hound breeds are noisy because they were bred to vocalize – Transylvanian, Basset, and Finnish hounds each perform a specific type of bark for hunting purposes [2]. These facts should be kept in mind for those looking to purchase or adopt a dog. Other causes of loud vocalizations in dogs are reactive howling and separation anxiety. It is relatively common for dogs to howl in response to high-pitched tones such as alarms, doorbells, and loud or upbeat music, some components of which are inaudible to humans but detectable by dogs [10]. Thus, it would be prudent for potential dog-owners to take into account where they live when considering their ideal companion – near a busy highway, a fire department or police station, a neighbor who plays percussion instruments, and so on. Meanwhile, separation anxiety is defined by the American Kennel Club (AKC) as “when your dog exhibits extreme stress from the time you leave him alone until you return;” it includes behaviors such as destroying items, pacing, urinating/defecating, and vocalizing loudly [11]. Separation anxiety can be alleviated through retraining, which is elaborated on in the next paragraph.

Given that the primary function of barking is to catch human attention, the most effective solution to excessive barking is counterconditioning and desensitization. Counterconditioning is conditioning against a specific behavior, while desensitization is the act of exposing an individual to a trigger until they no longer react to it [12]. These techniques were primarily developed to combat separation anxiety, which is characterized by destructive behaviors, restlessness, and persistent vocalization. Desensitization is also used to train reactive dogs – dogs that either run towards a stimulus or bark incessantly at it. In the case of stimulus-triggered vocalizations, desensitization involves exposing the individual to increasing amounts of the trigger, raising the dog’s reaction threshold until the dog no longer reacts to the stimulus (stops howling or barking). For counterconditioning, the ASPCA advises the following steps to resolve excessive barking issues: (1) ignore the noise and reward the dog for being quiet, (2) treat attention as a type of reward, (3) train “speak” and “hush” commands, and (4) spend time with the dog, as they may be barking because they have not been exercised enough or because their social needs have not been met [12]. By rewarding silence and/or increasing tolerance, human owners can “counter” the domestic dog’s instinct to be noisy.

In summary, human selection has led to dogs developing louder and more dependent vocalizations compared to their ancestral counterparts. This tendency towards loud attention-seeking enables dog owners to better understand their pet’s needs, but it can also backfire when humans and dogs live in close range. The balance lies between every dog’s genetic predisposition to be loud and the human caretaker’s responsibility to adequately discipline their pet.


 

References:

  1. Cohen JA, Fox MW. 1976. Vocalizations in Wild Canids and Possible Effects of Domestication. Behavioural Processes [Internet]. 1976 Jul [cited 2022 Feb 25]; 1(1):77-92. Available from: https://www.sciencedirect.com/science/article/pii/0376635776900085 doi: 10.1016/0376-6357(76)90008-5
  2. Pongrácz P, Molnár C, Miklósi Á. 2010. Barking in Family Dogs: an Ethological Approach. The Veterinary Journal [Internet]. 183(2):141–147. Available from: https://www.sciencedirect.com/science/article/abs/pii/S1090023308004437?via%3Dihub doi: 10.1016/j.tvjl.2008.12.010
  3. Faragó T, Townsend S, Range F. 2013. The Information Content of Wolf (and Dog) Social Communication. In: Biocommunication of Animals. Springer. p. 41–62. Available from: https://link.springer.com/chapter/10.1007/978-94-007-7414-8_4 doi: 10.1007/978-94-007-7414-8_4
  4. Harrington FH, Mech LD. 1979. Wolf Howling and its Role in Territory Maintenance. Behaviour [Internet]. 68(3-4):207–249. Available from: https://www.jstor.org/stable/4533952 doi: 10.1163/156853979X00322
  5. Spotte, S. 2012. Societies of Wolves and Free-Ranging Dogs. Cambridge University Press. Available from: https://ebookcentral.proquest.com/lib/ucdavis/detail.action?docID=866903&pq-origsite=primo 
  6. Tembrock G. 1976. Canid Vocalizations. Behavioural processes [Internet]. 1(1):57–75. Available from: https://linkinghub.elsevier.com/retrieve/pii/0376635776900073 doi:10.1016/0376-6357(76)90007-3
  7. Nowak S, Jędrzejewski W, Schmidt K et al. 2006. Howling Activity of Free-Ranging Wolves (Canis lupus) in the Białowieża Primeval Forest and the Western Beskidy Mountains (Poland). Journal of Ethology [Internet]. 25(3):231–237. Available from: https://link.springer.com/article/10.1007/s10164-006-0015-y  doi: 10.1007/s10164-006-0015-y
  8. Smith, D.W., M.C. Metz, K.A. Cassidy, E.E. Stahler, R.T. McIntyre, E.S. Almberg, and D.R. Stahler. 2015. Infanticide in Wolves: Seasonality of Mortality and Attacks at Dens Supports Evolution of Territoriality. Journal of Mammalogy. 96(6):1174-1183. Available from: https://www.jstor.org/stable/26372994?sid=primo&seq=1 doi:10.1093/jmammal/gyv125
  9. Mech, & Boitani, L. 2003. Wolves: Behavior, Ecology, and Conservation. University of Chicago Press.
  10. National Geographic. 2015. Calls of the Wild – Why do Animals (Including Your Dog) howl? 2021 May [Accessed February 25, 2022]. Available from: https://www.nationalgeographic.com/animals/article/150425-animals-behavior-wolves-dogs-howling-science
  11. Gibeault, Stephanie, M. S. 2021.  Dealing with Separation Anxiety in Dogs & Puppies. American Kennel Club [Internet]. 2021 Aug [cited 2022 Apr 26]. Available from: https://www.akc.org/expert-advice/training/dog-separation-anxiety-how-to-stop/
  12. Howling. ASPCA. (n.d.). Accessed February 25, 2022. Available from: https://www.aspca.org/pet-care/dog-care/common-dog-behavior-issues/howling

Smoking Cigarettes as a Potential Mechanism in Developing Alzheimer’s Disease

By Barry Nguyen, Biochemistry and Molecular Biology ‘23 & Vincent Tran, Neurobiology, Physiology, and Behavior ‘23

Authors’ Note: During my study abroad in South Korea, I was taken aback by the number of people smoking cigarettes in the streets. In a country that values health and beauty, I was surprised by the frequent sight of people smoking cigarettes. I then realized that not many people are aware of the cognitive effects cigarettes may induce. We wrote this review in hopes of spreading awareness of a link that many are not cognizant of.

 

Introduction

Alzheimer’s Disease (AD) is an irreversible neurodegenerative disease characterized by neuronal loss, memory impairment, and cognitive dysfunction (Wallin et al. 2017). It is the main form of dementia in the aging population, and its prevalence has been projected to quadruple in the coming century. AD pathogenesis is driven by a variety of factors, and cigarette smoking is a very common environmental factor in today’s society. Cigarettes contain thousands of organic compounds that have the capacity to induce adverse health effects (Wallin et al. 2017). Epidemiological studies strongly implicate cigarette smoking as an important risk factor for AD (Ho et al. 2012). Smoking cigarettes not only doubles the risk of developing cognitive disorders, but it also accelerates the rate of cognitive decline. Approximately 2 billion people worldwide use tobacco products, mostly in the form of cigarettes (Durazzo et al. 2014), and given the current projection of AD, it is becoming increasingly important to investigate the relationship between smoking cigarettes and cognitive decline. In this review, we delineate three strongly supported mechanisms by which tobacco use contributes to AD pathogenesis, and discuss the societal factors that underlie individual differences in the severity of AD symptoms.

Smoking-Associated Neurological Pathologies

Increased Cerebral Oxidative Stress 

Maintaining the biochemical integrity of the brain is essential for its normal functioning. Oxidative stress (OxS), in addition to its capacity to trigger various vascular pathologies (Chavez et al. 2007), can also impair cerebral biochemistry (Salim 2017). OxS is caused by free radicals, chemical entities with an unpaired electron. Free radicals are common by-products of metabolic processes; among them are reactive oxygen species (ROS), formed when oxygen undergoes a one-electron reduction (Chavez et al. 2007). When an excess of ROS, reactive nitrogen species (RNS), and other oxidizing agents is produced (Durazzo et al. 2014) and the antioxidant system is unable to keep up with radical formation, oxidative stress occurs (Salim 2017). The resulting pathologies generally include modification of biomolecules, which leads to defective cellular signaling and accumulation of dysfunctional proteins. With this cascade of negative events induced by OxS, the imbalance between production and detoxification of ROS can have serious consequences for many of our physiological systems – in particular, the brain.

Cigarettes are composed of approximately 5,000 combustion products and contain high concentrations of free radical species (Durazzo et al. 2014). Smoking inhibits the synthesis of antioxidant species, thereby propagating free radical formation and the consequences of oxidative stress. The human brain is particularly vulnerable to OxS damage due to its high metabolism and relatively low levels of antioxidant enzymes. Accordingly, smoking cigarettes may have profound consequences for the brain.

To detect oxidative stress in the brain and its differential manifestations in smokers and nonsmokers, scientists divided a group of nine male rats into two groups: four in the control group and five in the cigarette-smoking group (Ho et al. 2012). The control rats were exposed to sham air, or “clean air,” while the rats in the experimental group were exposed to cigarette smoke for one hour a day for 56 days. After 56 days, the rats were euthanized and brain tissue was retrieved.

Figure 1. Anti-8-OHG antibody was used to visualize the levels of oxidative stress present in the hippocampus (Ho et al. 2012).

 

To detect levels of oxidative stress, researchers used anti-8-OHG, an antibody that stains biomarkers of oxidative stress, and examined regions of the brain responsible for memory. 8-hydroxy-2′-deoxyguanosine (8-OHdG) and 8-hydroxyguanosine (8-OHG) are OxS biomarkers generated when guanine, a base present in DNA and RNA, is oxidized by reactive free radicals. In the immunohistochemistry (IHC) staining, higher fluorescence was observed in the experimental group compared to the control, suggesting that cigarette smoke induces cerebral oxidative stress. Rats exposed to smoke showed significant immunoreactivity of 8-OHG in the dentate gyrus, a region of the brain responsible for memory (Fig. 1A and B), and in CA3 (Fig. 1C and D), compared to rats in the control group. The immunoreactivity of 8-OHG in the experimental group suggests that smoking does indeed induce oxidative stress by oxidizing guanine bases in DNA.

Decreased Expression of Synaptic Proteins

Synapsins are synaptic proteins that are essential for the normal functioning of the brain (Ho et al. 2012). Synapsins belong to a family of phosphoproteins and are important for synapse development, neurotransmitter regulation, and nerve terminal formation (Mirza 2018). Current Alzheimer’s literature reveals substantial involvement of dysfunctional synapsins in the development of AD. Namely, synaptic loss in the neocortex and limbic system – both regions important for higher-order functions such as emotional responses, cognition, and spatial reasoning – may be responsible for the cognitive alterations in Alzheimer’s patients. Additionally, disturbances in synapsin homeostasis have been linked to cognitive deficits and defective neurotransmission in Alzheimer’s patients (Lin et al. 2014).

In the same experiment investigating cigarettes’ capacity to induce neurological dysfunction, scientists observed decreased expression of Synapsin 1, one of three isoforms of the protein (Ho et al. 2012). Using immunohistochemistry, the smoking group showed reduced fluorescent intensity, indicating decreased expression of Synapsin 1. These results further support tobacco use as a contributor to AD pathogenesis, given the significant reduction in Synapsin 1 observed in the results. Furthermore, cigarettes’ ability to induce small-scale pathological changes in the brain suggests a domino-like effect on cognitive function.

Figure 2. Immunohistochemical staining reveals significant reduction in Synapsin 1 protein between the control and smoking group (Ho et al. 2012). 

 

Abnormal Phosphorylation of Tau Proteins

Tau pathology, a hallmark of Alzheimer’s disease, arises from the abnormal phosphorylation of Tau protein (Neddens 2018). Hyperphosphorylated Tau aggregates into structures collectively known as neurofibrillary tangles (NFTs), a histopathological marker for AD (Miao 2019). Oxidative stress in particular can promote Tau pathology through products of fatty acid peroxidation, which provide a direct link to mechanisms that induce NFT formation (Liu 2015). The mechanism by which OxS promotes the phosphorylation of Tau and its subsequent aggregation depends on the type of oxidant and the specific amino acid sequences involved (Federica et al. 2019). For example, oxidation of cysteine residues has been observed to be involved in Tau aggregation, suggesting the phenomenon is a disulfide-bond-mediated process.

Evidence linking oxidative stress and Tau hyperphosphorylation comes from an experiment using buthionine sulfoximine treatment, which induces oxidative stress in M17 neuroblastoma cells (cancers of nerve cells) by inhibiting the synthesis of glutathione, an antioxidant important in maintaining the ROS equilibrium. Researchers were able to link HO-1, an oxidative stress biomarker, with an increase in hyperphosphorylation at PHF-1 sites. PHFs, or paired helical filaments, are the structural constituents of neurofibrillary tangles, a pathological hallmark of AD. Taken together, smoking-related OxS may serve as a fundamental mechanism in the pathogenesis of AD, indirectly influencing AD pathogenesis by propagating the formation of NFTs (Durazzo et al. 2014).

Possible Determinants of AD Differential Manifestations 

As devastating as Alzheimer’s disease can be, the extent of its harm varies across a wide spectrum, and some people face greater damage or faster onset than others. Such variation in Alzheimer’s effects may be linked to environmental as well as biological factors. Societal differences within the population can thus shape the impact the disease has on patients’ lifestyle and functioning. This idea that Alzheimer’s effects vary on an individual basis centers on an individual’s reserve against Alzheimer’s and other forms of dementia. Reserve against the effects of brain damage refers to the potential to alleviate dementia symptoms and slow progression, and is further categorized into brain reserve and cognitive reserve.

Brain Reserve and Educational Attainment’s Connection to AD

Brain reserve describes how a larger amount of brain mass can offset the amount of damage it takes for brain function to become impaired. Because patients afflicted by neurodegenerative diseases like Alzheimer’s can lose more neurons and synapses before the onset of clinical symptoms, those with larger brains may be better buffered against symptoms of dementia (Bartrés-Faz et al. 2011). With smoking linked to Alzheimer’s development, smokers are likely diminishing the very brain mass that buffers the rate at which neurological decay impairs memory and everyday functioning.

Variation in individuals’ brain reserve may be associated with, and predicted by, lifetime educational attainment. A study using structural MRI compared regional cortical thickness among individuals with different levels of educational attainment; those with more years of education were found to have greater regional cortical thickness, which served as a proxy for differences in brain size (Liu et al., 2012). This positive correlation between education level and cortical thickness suggests that further education supports the development of greater brain reserve.

Cognitive Reserve and Educational Attainment’s Connection to AD

Cognitive reserve is the concept that differences in learned cognitive processes can help the brain compensate for damage or dysfunction by relying on different functional approaches. As cognition in each individual relies on different recruitment patterns of neurons and synapses, individuals with more extensive neural networks are more likely to compensate for the loss of neurons in a network vital for specific cognitive tasks (Bartrés-Faz et al., 2011). Therefore, the same amount of brain damage in two individuals with similar brain sizes and physiologies could result in differing effects on their functioning, due to differences in cognitive reserve.

Furthermore, differences in cognitive reserve can be attributed to lifetime educational or occupational levels, with epidemiological studies showing that individuals with fewer than 8 years of education have a significantly higher risk of dementia (Stern, 2012).

Access to educational opportunities has been unequally distributed across socioeconomic lines, with higher education’s high costs making it significantly more accessible to the middle and upper classes. Furthermore, there has also been a disparity in the racial distribution of Alzheimer’s in the U.S., with African Americans having the highest prevalence of AD followed by Hispanic Americans. 

With smoking already presented as a risk factor for Alzheimer’s, its risk is compounded by the lessened reserve associated with lower educational attainment. Demographic studies demonstrate an inverse association between smoking and educational attainment, with those who have fewer years of education being more prone to starting (Maralani, 2013). Since those who smoke are likely to have less education, such individuals face greater physiological risk of acquiring AD while also being more susceptible to developing AD earlier.

Conclusion

Although smoking is known to be a root cause of a multitude of health risks and diseases, its effect on neurological health warrants increased scientific and public attention. As Alzheimer’s remains without a definitive cure, current treatments revolve around managing symptoms and preventing risk factors. Because smoking involves increased cerebral oxidative stress, decreased expression of synaptic proteins, and abnormal phosphorylation of Tau proteins, recent findings reiterate the necessity of avoiding cigarettes to minimize the risk of developing AD. For individuals in whom Alzheimer’s does develop, quality of life with symptoms can depend on educational history, reiterating society’s need for better access to education.


 

References:

  1. Bartrés-Faz, D., & Arenaza-Urquijo, E. M. (2011). Structural and functional imaging correlates of cognitive and brain reserve hypotheses in healthy and pathological aging. Brain Topography, 24(3-4), 340–357. 
  2. Durazzo, T., Mattsson, N., & Weiner, M. 2014. Smoking and Increased Alzheimer’s Disease Risk: A Review of Potential Mechanisms. Alzheimer’s & Dementia 10(3 Suppl):S122–S145.
  3. Durazzo, T., Korecka, M., Trojanowski, J., Weiner, M., O’Hara, R., Ashford, J., & Shaw, L. 2016. Active Cigarette Smoking in Cognitively-Normal Elders and Probable Alzheimer’s Disease Is Associated with Elevated Cerebrospinal Fluid Oxidative Stress Biomarkers. Journal of Alzheimer’s Disease 54:99–107.
  4. Chavez, J., Cano, C., Souki, A., Bermudez, V., Medina, M., Ciszek, A., Amell, A., Vargas, M., Reyna, N., Toledo, A., Cano, ., Suarez, G., Contreras, F., Israili, Z., Hernandez-Hernandez, R., & Velasco, M. 2007. Effect of Cigarette Smoking on the Oxidant Antioxidant Balance in Healthy Subjects. American Journal of Therapeutics 14:189–193.
  5. Ho, Y., Yang, X., Yeung, S., Chiu, K., Lau, C., Tsang, A., Mak, J., & Chang, R. 2012. Cigarette Smoking Accelerated Brain Aging and Induced Pre-Alzheimer-Like Neuropathology in Rats. PLoS ONE 7.
  6. Lin, L., Yang, S., Chu, J., Wang, L., Ning, L., Zhang, T., Jiang, Q., Tian, Q., & Wang, J. 2014. Region-Specific Expression of Tau, Amyloid-β Protein Precursor, and Synaptic Proteins at Physiological Condition or Under Endoplasmic Reticulum Stress in Rats. Journal of Alzheimer’s Disease 41:1149–1163.
  7. Liu, Y., Julkunen, V., Paajanen, T., Westman, E., Wahlund, L.-O., Aitken, A., Sobow, T., Mecocci, P., Tsolaki, M., Vellas, B., Muehlboeck, S., Spenger, C., Lovestone, S., Simmons, A., & Soininen, H. (2012). Education increases reserve against Alzheimer’s disease—evidence from structural MRI analysis. Neuroradiology, 54(9), 929–938. 
  8. Maralani, V. (2013). Educational inequalities in smoking: The role of initiation versus quitting. Social Science & Medicine, 84, 129–137. 
  9. Miao, J., Shi, R., Li, L., Chen, F., Zhou, Y., Tung, Y., Hu, W., Gong, C., Iqbal, K., & Liu, F. 2019. Pathological Tau From Alzheimer’s Brain Induces Site-Specific Hyperphosphorylation and SDS- and Reducing Agent-Resistant Aggregation of Tau in vivo. Frontiers in Aging Neuroscience 11:34.
  10. Mirza, F., & Zahid, S. 2018. The Role of Synapsins in Neurological Disorders. Neuroscience Bulletin 34:349–358.
  11. Neddens, J., Temmel, M., Flunkert, S., Kerschbaumer, B., Hoeller, C., Loeffler, T., Niederkofler, V., Daum, G., Attems, J., & Paier, B. 2018. Phosphorylation of Different Tau Sites During Progression of Alzheimer’s Disease. Acta Neuropathologica Communications 6.
  12. Salim, S. 2017. Oxidative Stress and the Central Nervous System. Journal of Pharmacology and Experimental Therapeutics 360:201–205.
  13. Stern, Y. (2012). Cognitive Reserve in aging and Alzheimer’s disease. The Lancet Neurology, 11(11), 1006–1012.
  14. Wallin, C., Sholts, S., Österlund, N., Luo, J., Jarvet, J., Roos, P., Ilag, L., Gräslund, A., & Wärmländer, S. 2017. Alzheimer’s disease and cigarette smoke components: effects of nicotine, PAHs, and Cd(II), Cr(III), Pb(II), Pb(IV) ions on amyloid-β peptide aggregation. Scientific Reports 7.

The Heart of the Matter

By La Rissa Vasquez, Neurobiology, Physiology, and Behavior ‘23 and Shaina Eagle, Global Disease Biology ‘24

 

In 1818, Mary Shelley published what is now regarded as a pioneering work of the science fiction genre: Frankenstein. In this novel, an ambitious scientist named Dr. Victor Frankenstein challenges the laws of nature by bringing the dead back to life. Sewn from animal and human remains, the “Creature” was sentient and desired love and acceptance like any human, but would later be known only as the infamous monster of Frankenstein. These are no longer just stories. In recent decades, scientists have made huge strides in transplantation technology, even experimenting with the transplantation of non-human organs and redefining the laws of nature.

On January 7, 2022, David Bennett became the first person to successfully undergo the transplant of a non-human organ, or xenotransplantation. Bennett was suffering from end-stage heart disease and was ineligible for a human heart transplant, but the U.S. Food and Drug Administration granted compassionate use authorization for the experimental transplant of a genetically modified porcine heart [1]. The surgery was performed by Bartley Griffith, M.D. at the University of Maryland Medical Center in Baltimore [1].

The shortage of organs available for transplantation is an ongoing problem, especially for patients like Bennett in end-stage organ failure, for whom transplantation is often the only option. For decades, medical geneticists and surgeons have worked to make xenotransplantation a reality. In the United States alone, the transplant waiting list has over 100,000 people on it [2]. As surgical technology and the understanding of genetics have advanced, so too has the number of patients in need of organ transplants.

‘Porkensteen:’ Bennett’s New Heart

David Bennett’s porcine heart came from Revivicor, a subsidiary of United Therapeutics Corporation. On their website, the organ is advertised as “UHeart™.” Also in the United Therapeutics pipeline are xenokidneys and xenolung lobes, which are both designed to target end-stage organ failure and are in the same pre-clinical stage as the porcine heart used in the Maryland surgery.

The heart transplanted into Bennett was not simply a heart removed straight from a pig. Revivicor altered ten genes, knocking out several pig genes and incorporating human genes, to prevent rejection by the patient’s immune system and to keep the heart tissue from growing excessively large inside Bennett’s chest. In a collaboration between the University of Maryland School of Medicine’s cardiac xenotransplantation program and Kiniksa Pharmaceuticals, Bennett was also given KPL-404, an experimental immunosuppressant used to prevent immune rejection of the organ by suppressing the T-cell-dependent antibody response [3].

Genetic modifications and immunosuppressive therapies are integral to the success and scaling of xenotransplantation. After decades of experimentation with non-human primates, the domestic pig was identified as the ideal donor, due in part to the similarity in size and physiology between pig and human organs, as well as the reduced risk of zoonotic disease transmission [4]. However, genetic differences between pigs and humans complicate transplantation by increasing the likelihood of rejection by the human immune system. Bennett’s medical team was initially able to avoid complications such as hyperacute rejection, in which the recipient’s own antibodies attack the foreign organ, and coagulation system dysregulation.

Unfortunately, on March 8, 2022, it was announced that Bennett had died [5]. The University of Maryland Medical Center has not yet announced an official cause of death but plans to publish a full clinical study in the future. Despite the outcome, Bennett lived with the transplanted organ for more than three months post-procedure. His surgery is another step in a long line of recent breakthroughs in xenotransplantation [6] and will guide researchers in their quest for sustainable and effective porcine organ transplantation in the future.

Reanimating the past 

The first known xenotransplantation was a transfusion of lamb’s blood into a 15-year-old French boy in 1667. Nearly two hundred years later, French physician Paul Bert warned against cross-species transplantation in “On Animal Transplantation” [7]. Cases of xenotransplantation picked up speed in the early 1900s, using organs from various species, but all ended in the death of the recipient. The first pig organ transplant on record was performed in Lyon, France in 1905 by surgeon Mathieu Jaboulay, using a porcine kidney; the patient survived only three days [7]. A surgeon attempted the transplantation of a porcine heart for the first time in 1968 in London; this time, the patient survived only four minutes post-procedure [7].

A number of attempts at xenotransplantation have been made over the decades, varying in their levels of success and reflecting improvements in allotransplantation (transplantation within the same species) as well as in immunosuppressive and gene-editing technology. Gene-editing tools such as clustered regularly interspaced short palindromic repeats (CRISPR-Cas9) allow scientists to modify genes so that transplanted organs are less likely to be immunologically rejected [4, 7]. One setback came with the discovery of porcine endogenous retroviruses (PERVs) in 1994, but in 1998, the FDA allowed porcine transplants to resume under strict guidelines after it was shown that PERV infections could be detected in recipients. In 1999, the FDA banned the use of primate organs in xenotransplantation because the risk of infectious disease was too high [8]. The beginning of the twenty-first century saw a number of trials of pig to non-human primate transplantation, and just four months before Bennett’s surgery, surgeons in New York City transplanted genetically modified porcine kidneys into brain-dead recipients [6]. Although the recipients were being sustained on ventilators, the fact that the organs were not rejected was a milestone.

When Pigs Fly: The Future of Xenotransplantation 

Ancient Greek mythology tells the story of Daedalus attaching the wings of a bird onto his son Icarus’ back, in an attempt to escape the island of Crete. Icarus falls to his death after boldly flying too close to the sun and melting his wings off: his fatal flaw was unfettered pride and ambition. 

Will our penchant for progress and self-congratulation in the wake of our discoveries be our downfall? An often overlooked part of the story of Icarus is the instruction that Daedalus gave to his son before their flight: “fly too low and the sea will dampen and clog the wings.” Ambition can surely lead to disappointment, but so can complacency. If medical and scientific technology were not allowed to advance, then we would drown in a sea of ignorance. To keep from drowning, we use our current understanding to build a raft, and we treat ethical questions as quandaries to navigate rather than as boundaries, surviving the turbulent seas and riding the tides of progress.

When we think about the future of xenotransplantation, we should be excited about the possibilities of this new application. CRISPR-Cas9 allows scientists to precisely modify genes, resolving many immunological concerns while producing viable animal subjects within short periods of time; this is promising for the scaling of xenotransplantation. These engineered pigs carry fewer xenoantigens (antigens found in more than one species), reducing the risk of organ rejection and the development of fatal xenozoonosis (an infectious disease transferred from animal to human via the transplanted organ). Reducing the risks of organ rejection and zoonotic disease is paramount to the success of xenotransplantation across all human organ systems.

Xenografted porcine fetal neurons are a promising treatment for Parkinson’s disease and Huntington’s disease, alongside biologically engineered organs grown in vitro and 3D-printed organs [13]. These medical applications could shorten transplant waitlists and help those who are ineligible to receive an organ due to other illnesses [9]. Patients who are older and in a late stage of disease, like David Bennett, are also less likely to be given priority on a waitlist [10]. Xenotransplantation allows scientists to create and edit the tissue of a working animal model, tailor it to a patient’s distinct genetic disposition, and produce it in abundance.

Too Close to the Sun: Ethical Concerns

“Fly too high and the sun will melt your wings:” xenotransplantation is an ambitious operation, but it is not unfettered. The existence of ethics in science is not a dilemma but a framework for navigating the horizon of change for animal rights, the welfare of patients, and religious exemptions. The genetic engineering and subsequent raising of pigs within sterile lab conditions to prevent disease, for the sole purpose of organ harvesting, comes at a great cost to the animals’ welfare [11]. There are also religious considerations that could further stigmatize the practice of xenotransplantation, because the mixing of the human with the non-human is often seen as taboo [12].

The heart as an organ has philosophical and physiological definitions, but the heart as a cultural and societal symbol has carried inexplicable and global significance since ancient times. So what does it truly mean to be human? Even though he was composed of dead human and animal flesh, Frankenstein’s creature still had a heart. He felt love and sorrow like any human. He was an abandoned creation who became a monster because those around him lacked the ability to show him compassion. What makes us human is our ability to adapt, advance, and, most importantly, show empathy. The topic of xenotransplantation requires as much an open mind as it does an open heart to help make the treatment more accessible to others.

Stories and fables are woven into our morality. They can help explain why we fear change at the risk of uncertainty and chase after discovery at the prospect of reward. In both tales, Dr. Frankenstein and Icarus are warned not to take pride in their intelligence because knowledge is a power equivalent to the gods. But within the realm of science and society, knowledge is not a deity or a harbinger but a vital part of our survival. As a species, we are obligated to share knowledge when it can save lives. And as humans, survival is ingrained in our biology and consciousness. It is etched in our history, pursued in our present, and foreseen in our futures. Xenotransplantation as a medical practice to save a person’s life is not inhuman nor is it hubris, but to deny ourselves a known resource in the ongoing odyssey of survival would be monstrously heartless. 

 

References:

  1. In first surgery of its kind, Maryland man receives heart transplanted from genetically modified pig. Washington Post. [accessed 2022 Apr 26]. https://www.washingtonpost.com/science/2022/01/11/pig-heart-transplant-genetically-modified/.
  2. Organ Donation Statistics | organdonor.gov. [accessed 2022 Apr 26]. https://www.organdonor.gov/learn/organ-donation-statistics.
  3. Kiniksa Announces Positive Final Data from Phase 1 Trial of KPL-404 | Kiniksa Pharmaceuticals. [accessed 2022 Apr 26]. https://investors.kiniksa.com/news-releases/news-release-details/kiniksa-announces-positive-final-data-phase-1-trial-kpl-404/.
  4. Ryczek N, Hryhorowicz M, Zeyland J, Lipiński D, Słomski R. 2021. CRISPR/Cas Technology in Pig-to-Human Xenotransplantation Research. Int J Mol Sci. 22(6):3196. doi:10.3390/ijms22063196. [accessed 2022 Apr 26]. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8004187/.
  5. Rabin RC. 2022 Mar 9. Patient in Groundbreaking Heart Transplant Dies. The New York Times. [accessed 2022 Apr 26]. https://www.nytimes.com/2022/03/09/health/heart-transplant-pig-bennett.html.
  6. Thompson J. Pig Kidneys Transplanted to Human in Milestone Experiment. Scientific American. [accessed 2022 Apr 26]. https://www.scientificamerican.com/article/pig-kidneys-transplanted-to-human-in-milestone-experiment/.
  7. Siems C, Huddleston S, John R. 2022. A Brief History of Xenotransplantation. The Annals of Thoracic Surgery. 113(3):706–710. doi:10.1016/j.athoracsur.2022.01.005. [accessed 2022 Apr 26]. https://www.sciencedirect.com/science/article/pii/S0003497522000716.
  8. Fishman JA. 2018. Infectious disease risks in xenotransplantation. Am J Transplant. 18(8):1857–1864. doi:10.1111/ajt.14725. [accessed 2022 Apr 26]. https://onlinelibrary.wiley.com/doi/10.1111/ajt.14725.
  9. What Disqualifies You for a Liver Transplant? MedicineNet. [accessed 2022 Apr 26]. https://www.medicinenet.com/what_disqualifies_you_for_a_liver_transplant/article.htm.
  10. Sade RM, Mukherjee R. 2022. Ethical Issues in Xenotransplantation: The First Pig-to-Human Heart Transplant. The Annals of Thoracic Surgery. 113(3):712–714. doi:10.1016/j.athoracsur.2022.01.006. [accessed 2022 Apr 26]. https://www.annalsthoracicsurgery.org/article/S0003-4975(22)00072-8/fulltext.
  11. Rollin BE. 2020. Ethical and Societal Issues Occasioned by Xenotransplantation. Animals (Basel). 10(9):1695. doi:10.3390/ani10091695. [accessed 2022 Apr 26]. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7552641/.
  12. Derenge S, Rossman Bartucci M. 1999. Issues Surrounding Xenotransplantation. AORN Journal. 70(3):428–432. doi:10.1016/S0001-2092(06)62324-7. [accessed 2022 Apr 26]. https://www.sciencedirect.com/science/article/pii/S0001209206623247.
  13. Fink JS, Schumacher JM, Ellias SL, et al. Porcine xenografts in Parkinson’s disease and Huntington’s disease patients: preliminary results. Cell Transplant. 2000;9(2):273-278. doi:10.1177/096368970000900212. [accessed 2022 Apr 26]. https://pubmed.ncbi.nlm.nih.gov/10811399

What do scaling laws tell us about the biochemistry of life beyond Earth?

By Tammie Tam, Molecular and Medical Microbiology ‘22

 

Humanity has always been intrigued by the possibility of life existing elsewhere in the universe. In 1977, NASA attached the Golden Record, a detailed account of humanity and Earth, to the Voyager 1 and 2 space probes [1]. The records were intended as a way to share human knowledge with intelligent life forms that might stumble upon the space probes in interstellar space. While Voyager 1 and 2 entered interstellar space in 2012 and 2018, respectively, they have yet to encounter any intelligent life forms capable of deciphering the record [2]. But extraterrestrial life doesn’t have to be “intelligent.” Life can be as simple as a swimming bacterium.

With the broad possibility of what life can look like, most scientists rely on what they understand about life’s limits on Earth to narrow down possible markers of extraterrestrial life. In 2022, a team led by Sara Walker at Arizona State University defined the types of biochemical functions likely to be found sustaining life throughout the universe [3]. Their work is a step forward in predicting how life may exist in places that look nothing like Earth by identifying potential signs of life using chemistry. 

Walker’s team did this by computing scaling laws, which represent the proportional relationship between two quantifiable variables raised to some power [3]. Often these laws are used to demonstrate the universality of phenomena found in physics. A simple example is the relationship between the surface area and volume of a cube: a cube of side length L has surface area 6L² and volume L³, so surface area is proportional to volume raised to a power, or scaling exponent, of ⅔ [4]. Given two cubes of differing sizes, the ratio of their surface areas equals the ratio of their volumes raised to that same exponent. What makes this a universal principle is that no matter what material the cube is made of, or where the cube is found, the relationship between surface area and volume stays the same.

In biology, scaling laws are not as commonly applied, but certain biological characteristics have still been shown to scale with one another. For example, scientists have defined scaling laws between body mass and other features like growth rates, metabolic rates, and life spans [5]. These scaling laws in biology are typically defined by power laws of the form y = ax^k, where y is one variable, a is a constant, x is the other variable, and k is the scaling exponent [5]. Scaling laws in no way reveal the mechanism by which one feature regulates another. Instead, they show that no matter the differences in detail between the systems of different organisms, there is an underlying commonality between them all.
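To see how such an exponent is estimated in practice, here is a minimal sketch in Python; the data are synthetic stand-ins, not measurements from the studies cited here. Taking logarithms turns y = ax^k into a straight line, log y = log a + k log x, so k can be read off as the slope of a log-log fit.

```python
import numpy as np

# Synthetic data for illustration only: "body masses" x spanning six orders
# of magnitude and "rates" y generated to follow y = a * x**k with k = 0.75,
# plus multiplicative noise.
rng = np.random.default_rng(0)
x = np.logspace(0, 6, 50)
y = 2.0 * x**0.75 * rng.lognormal(mean=0.0, sigma=0.1, size=x.size)

# Fit log y = log a + k log x by least squares; the slope is the exponent k.
k, log_a = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"estimated exponent k = {k:.3f}, prefactor a = {np.exp(log_a):.3f}")
```

On data that genuinely follow a power law, the recovered k lands close to the generating exponent (here 0.75, echoing the mass-growth scaling mentioned above).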

One caveat is that scaling laws for these well-studied biological features like growth and mass may only apply to life forms on Earth. For instance, while final body mass and maximum growth rate generally scale with each other at k = ¾ within and across many taxonomic groups, there is no reason that this growth scaling must hold for extraterrestrial life [5]. Because there are many contributing factors that scaling laws do not reveal, the growth scaling may reflect some adaptation to living on Earth. This is evident in the fact that certain taxonomic groups do not follow the same scaling patterns for growth rate and mass. Thus, scaling laws in biology are limited in their application when examining certain biological features.

However, chemistry throughout the universe is bound by the same laws and would not change whether on Earth or Mars or an asteroid in another galaxy. Walker’s group, therefore, started thinking about how, at its core, life is run by different chemical reactions [3]. Since atoms and chemicals are universal, perhaps the chemistry of life on Earth, or biochemistry, can be generalized to include extraterrestrial life.

The group studied enzymes, proteins that help chemical reactions run in the body. They collated databases covering all known enzymes across the tree of life. Each enzyme’s class, reactions, and components are systematically encoded by a unique Enzyme Commission (EC) numerical identifier. The researchers used these ID numbers to group enzymes into classes – oxidoreductases, transferases, hydrolases, lyases, isomerases, and ligases – and to count the unique enzymes in each class [3].
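To illustrate how EC identifiers support this kind of counting, the sketch below (in Python, with hypothetical EC numbers rather than the study’s actual dataset) groups enzymes by the first field of their EC number, which encodes the top-level class, and tallies unique entries per class.

```python
from collections import defaultdict

# The first field of an EC number encodes the top-level class
# (1 = oxidoreductase, 2 = transferase, 3 = hydrolase,
#  4 = lyase, 5 = isomerase, 6 = ligase).
EC_CLASSES = {1: "oxidoreductase", 2: "transferase", 3: "hydrolase",
              4: "lyase", 5: "isomerase", 6: "ligase"}

def count_unique_by_class(ec_numbers):
    """Count the unique enzyme IDs in each top-level EC class."""
    unique = defaultdict(set)
    for ec in ec_numbers:
        top_level = int(ec.split(".")[0])
        unique[EC_CLASSES[top_level]].add(ec)
    return {name: len(ids) for name, ids in unique.items()}

# Hypothetical EC numbers, for illustration only.
example = ["1.1.1.1", "2.7.1.1", "2.7.1.2", "3.1.3.9", "1.1.1.27"]
print(count_unique_by_class(example))
# {'oxidoreductase': 2, 'transferase': 2, 'hydrolase': 1}
```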

By computing scaling laws as a statistical measure, they found that the number of unique enzymes within any particular class increases as the total number of enzymes from all classes increases [3]. This scaling relationship can generally be defined by y = ax^k, with k > 0 for all enzyme classes.

While each enzyme class demonstrated different scaling behaviors, what matters for determining the universality of a biochemical function is whether an enzyme class shows the same scaling behavior across all domains of life: archaea, bacteria, and eukaryotes. In their study, every enzyme class except the lyases had scaling exponents that fell within a 95% confidence interval of one another between domains, meaning the scaling behavior is similar enough to establish all enzyme classes except lyases as universal biochemical functions [3].

The issue with lyases is resolved when lyases are grouped with hydrolases. Both enzyme classes deal with breaking down molecules. Together, their scaling behaviors between domains of life fall within a 95% confidence interval of each other, meaning lyases may potentially also constitute a universal biochemical function if reclassified with hydrolases [3].
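The comparison logic can be sketched as follows; this is a toy version, not the authors’ actual statistical pipeline, and the count arrays are hypothetical. For one enzyme class, it fits a per-domain scaling exponent on log-log axes and checks whether the 95% confidence intervals of the two exponents overlap.

```python
import numpy as np
from scipy import stats

def fit_exponent(total_enzymes, unique_in_class):
    """Fit y = a * x**k on log-log axes; return k and a rough 95% CI."""
    res = stats.linregress(np.log(total_enzymes), np.log(unique_in_class))
    # 95% CI for the slope from its standard error (normal approximation).
    half_width = 1.96 * res.stderr
    return res.slope, (res.slope - half_width, res.slope + half_width)

# Hypothetical counts for one enzyme class in two domains; each array pairs
# total enzymes per genome with unique enzymes belonging to the class.
bacteria_total = np.array([500, 800, 1200, 2000, 3100])
bacteria_class = np.array([ 60, 100,  150,  240,  370])
archaea_total  = np.array([400, 700, 1000, 1600, 2500])
archaea_class  = np.array([ 45,  85,  120,  190,  300])

k_b, ci_b = fit_exponent(bacteria_total, bacteria_class)
k_a, ci_a = fit_exponent(archaea_total, archaea_class)
overlap = ci_b[0] <= ci_a[1] and ci_a[0] <= ci_b[1]
print(f"bacteria k={k_b:.2f}, archaea k={k_a:.2f}, CIs overlap: {overlap}")
```

Overlapping intervals would be read, in the spirit of the study, as the class scaling similarly across domains.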

These results reveal how these enzyme classes can inform what biochemical functions allow life to occur regardless of the specific chemical components. The group reasoned that since they did not examine the mechanistic details of how enzymes function, the work may generalize beyond the tree of life as we know it [3].

This research will motivate others across various disciplines to propose new ideas to predict features of life outside of the immediate solar system. And until we find evidence of life (or until they find us), there are still many more ways scientists can try to bring some understanding to the unknown.

Figure 1. A conceptual schematic of how the number of unique enzymes in each enzyme class relates to the total number of enzymes across all enzyme classes, and of how these scaling relationships compare between archaea, bacteria, and eukaryotes. The enzyme classes examined are oxidoreductase (EC 1), transferase (EC 2), hydrolase (EC 3), lyase (EC 4), isomerase (EC 5), and ligase (EC 6).

 

References:

  1. Voyager – What’s on the Golden Record. [accessed 2022 Apr 26]. https://voyager.jpl.nasa.gov/golden-record/whats-on-the-record/.
  2. Voyager – Mission Status. [accessed 2022 Apr 26]. https://voyager.jpl.nasa.gov/mission/status/.
  3. Gagler DC, Karas B, Kempes CP, Malloy J, Mierzejewski V, Goldman AD, Kim H, Walker SI. 2022. Scaling laws in enzyme function reveal a new kind of biochemical universality. Proc Natl Acad Sci USA. 119(9):e2106655119. doi:10.1073/pnas.2106655119. [accessed 2022 Apr 26]. https://pnas.org/doi/full/10.1073/pnas.2106655119.
  4. Introduction to scaling laws. [accessed 2022a Apr 26]. https://www.av8n.com/physics/scaling.htm.
  5. Hatton IA, Dobson AP, Storch D, Galbraith ED, Loreau M. 2019. Linking scaling laws across eukaryotes. Proc Natl Acad Sci USA. 116(43):21616–21622. doi:10.1073/pnas.1900492116. [accessed 2022 Apr 26]. https://pnas.org/doi/full/10.1073/pnas.1900492116.

650-million-year-old enzyme used to target cell death in cancer cells

By Vishwanath Prathikanti, Anthropology, ‘23

Author’s note: As someone studying anthropology at Davis, I often see my friends confused when I tell them how much of my studies consist of biology and chemistry. It’s a fairly common conception that anthropologists mainly study human culture, and while cultural anthropology is an important aspect of the field, it is still only a part of it. When I heard about how our ancestors’ enzymes are being used to advance our knowledge of cancer, I knew it could be an opportunity to change the perception of anthropology among students.

 

Most people have a general understanding of how cancer works: it arises when apoptosis, or programmed cell death, fails to occur in cells. These cells propagate and then aggregate into tumors. The tumors can spread across the body and lead to varying health complications, depending on whether they are benign (isolated to one part of the body) or malignant (spreading to other areas). Naturally, one possible solution would be to fix the machinery in cancer cells that prevents them from properly dying. So how does a cell die?

Apoptosis hinges on enzymes called effector caspases, which deactivate proteins that carry out normal cellular processes, activate nucleases and kinases that break down DNA, and disassemble various components of the cell [1]. So to cause cell death in cancer cells, scientists would need to activate caspases. However, indiscriminate activation would affect all cells, not just cancerous ones. The challenge scientists face is activating caspases in cancer cells without impacting the healthy surrounding cells, and doing so requires an intimate knowledge of the different proteins that comprise the caspase family – knowledge the scientific community currently lacks.

In an effort to learn more about the structure of caspases, Suman Shrestha and Allan C. Clark from the University of Texas at Arlington decided to look to the past rather than just the present. Specifically, they wanted to analyze the folding mechanisms and structure of effector caspases and construct a picture of how they operated for our ancestors [2]. 

A recent trend in evolutionary biology and physical anthropology has been to compare proteins and their folding structures across organisms alive today in order to reconstruct what those proteins looked like in our ancestors [3]. This is carried out via a computer program that generates a phylogenetic tree of a protein family, a process known as ancestral sequence reconstruction (ASR). After the phylogeny is generated, the ASR program statistically infers where in the tree particular proteins changed or emerged [4]. This is done by comparing binding sites in proteins. The program flags binding sites as “ambiguous sites” when a node (a branching point in the phylogenetic tree) has a <70% probability of being accurate [5]. In caspases, this ambiguity generally arises from one of two possibilities. The first is that there is nearly a 50/50 chance that an identified ancestral protein led to the extant version rather than to another identified protein. The second is that the binding site has a high mutation rate, lowering the probability that it has been characterized correctly [5]. As for the remaining sites, different ASR programs have slightly different levels of accuracy, but the most prominently used ones infer around 90-93% of non-ambiguous sites correctly [8]. Finally, using the protein sequences of organisms alive today and the phylogeny that depicts their ancestors, the ASR program can present the most likely sequence of the protein at a particular node in the phylogeny [6].
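As a heavily simplified illustration of that final step, the Python sketch below assumes an ASR program has already computed per-site posterior probabilities for one ancestral node; it then selects the most likely residue at each site and flags “ambiguous sites” using the <70% cutoff mentioned above. The probabilities are invented for illustration, and this is not the software used in the study.

```python
# One dictionary per alignment column: posterior probability of each residue
# at the ancestral node (values invented for illustration).
posteriors = [
    {"A": 0.95, "S": 0.05},
    {"D": 0.55, "E": 0.45},   # near 50/50 -> ambiguous
    {"K": 0.72, "R": 0.28},
]

ancestral_sequence = []
ambiguous_sites = []
for site, probs in enumerate(posteriors):
    residue, probability = max(probs.items(), key=lambda kv: kv[1])
    ancestral_sequence.append(residue)
    if probability < 0.70:            # the ambiguity threshold from the text
        ambiguous_sites.append(site)

print("".join(ancestral_sequence))    # "ADK": most likely ancestral sequence
print(ambiguous_sites)                # [1]: site needing cautious interpretation
```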

 

Caption: The ASR process generates the phylogeny (C) as well as the ancestral sequences and their order, provided the sequences of extant species are supplied to the program (D) [4].

 

Using ASR, Shrestha and Clark discovered that effector caspases first evolved in a common ancestor more than 650 million years ago (mya), when microorganisms and sponges dominated life. While ASR cannot identify the species of that organism, it can generate the predicted sequences of these ancient caspases. This is all that is needed to recreate the proteins and better understand how these caspases function under healthy conditions versus cancerous ones [2, 7].

Among the 12 proteins that make up the caspase family, Shrestha and Clark decided to reconstruct the ancestors of three specific ones: caspase-3, -6, and -7 [7]. These three caspases were chosen because they are specifically responsible for cell death, whereas the others are linked to inflammation or to the activation of other enzymes [7, 8]. After reconstructing the ancestral proteins, Shrestha and Clark were able to identify changes in the folding structures and sequences that could activate effector caspases in tumor cells without triggering cell death in healthy cells.

Specifically, they confirmed two folding precursors in the creation of the caspase-6 and -7 proteins. While these precursors had already been discovered in caspase-3, the finding was significant for understanding how the caspases work in a normal cell and how they are altered in a cancer cell. Shrestha and Clark noted mutations that slow the formation of these precursors, which in turn greatly slows caspase production, leaving a cell unable to die when it needs to [2]. Understanding this regulatory process may allow researchers to discover a way to reactivate caspase production in cancer cells.

The vast majority of the data collected in the study concerned how stable these proteins are and how they have evolved since our common ancestor 650 mya. The researchers found that caspase-6 was the most stable of the three and that, at lower pH values, caspase-6 is the only one that does not unfold irreversibly [2]. This suggests a more specialized role for caspase-6 compared to caspase-3 and -7, and the data may be useful for adapting cancer-targeting drugs. For example, if a cancer aggregate sits in a low-pH environment of the body such as the stomach, a cancer-targeting drug might utilize caspase-6 specifically to activate programmed cell death.

While the results are still fairly recent and have not had adequate time to be implemented into a treatment, Morteza Khaledi, dean of the College of Science at the University of Texas at Arlington, was incredibly excited about them. In a press statement to the University of Texas at Arlington, he announced that the research had yielded “vital information about the essential building blocks for healthy human bodies” and that the knowledge gained from the study will be seen as “another weapon in our fight against cancer” [7].

 

References:

  1. https://www.sciencedirect.com/topics/medicine-and-dentistry/effector-caspase 
  2. https://www.sciencedirect.com/science/article/pii/S0021925821010528?via%3Dihub 
  3. https://www.nature.com/articles/nrg3540 
  4. https://onlinelibrary.wiley.com/doi/10.1002/bip.23086 
  5. https://www.sciencedaily.com/releases/2022/01/220112094022.htm 
  6. https://link.springer.com/chapter/10.1007/978-1-4614-3229-6_4?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot 
  7. https://portlandpress.com/biochemj/article/476/22/3475/221018/Resurrection-of-ancestral-effector-caspases 
  8. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0020069
  9. https://www.sciencedirect.com/science/article/pii/S2213020916302336

Reviewing Methods of Studying Epigenetic Drift in Monozygotic Twins

By Pranjal Verma, Neurobiology, Physiology, and Behavior ‘25

 

Introduction

Twin births made up 3.11% of American live births in the year 2020 [1]. There are two types of twin pairs: monozygotic (MZ), consisting of identical genomes, and dizygotic (DZ), consisting of genomes that are on average 50% similar (the same as ordinary siblings) [2]. The identical nucleotide sequences of MZ twins can be used to monitor chemical changes to gene expression caused by environmental factors over time, the study of which is known as epigenetics [3-5].

Such environmental effects on the genome result in a phenomenon known as epigenetic drift, in which the epigenome changes over time. In MZ twins specifically, older twin pairs tend to display more epigenetic differences than younger twin pairs [6, 7]. This may be why twins start to differ in physical characteristics as they age. The most commonly observed epigenetic changes are the methylation of cytosines at cytosine-guanine (CpG) dinucleotides and the acetylation of histones [3]. In this review, we will consider these two modifications separately and discuss studies investigating each.

Acetylation and Methylation of the Genome on a Molecular Level

Figure 1. Variations in the methods of histone modification. Pictured above are methylated histones and condensed DNA, which is transcriptionally inactive. Pictured below are acetylated histones and decondensed DNA, which is accessible for transcription.

Mechanism of Histone Acetylation

Histone acetylation is one of the most commonly observed mechanisms by which genes are activated in the epigenome. Acetylation changes the structure of the chromatin in the genome, typically through the action of transcription-activating enzymes known as histone acetyltransferases (HATs) and their counterparts, histone deacetylases (HDACs) [8, 9]. Interactions between HATs and HDACs determine the acetylation state of the epigenome. For example, environmental stimuli may trigger cellular signaling pathways that activate factors which recruit HATs [10, 11]. Additionally, proteins binding to methylated DNA possess a transcription-regulatory domain that recruits HDACs to the site [12]. Thus, an environmentally driven increase in methylation could increase the amount of HDACs present.

Mechanism of DNA Methylation

Another common mechanism of epigenetic change is cytosine methylation at CpG sites, where a guanine nucleotide follows a cytosine nucleotide in the 5'→3' direction [13]. This methylation is primarily caused by the transfer of a methyl group onto the C5 position of cytosine by DNA methyltransferases (DNMTs), and it can be used as an epigenetic clock to estimate chronological age [14]. DNA methylation may also have different effects in different regions of the genome. For example, it can regulate tissue-specific gene expression, but it is also important in X chromosome inactivation [15].
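Since the definition of a CpG site is purely positional, it is easy to state precisely in code. The short sketch below is a hypothetical helper rather than any published tool: it scans a DNA string for cytosines immediately followed by guanines, the positions where the DNMT-mediated methylation described above can occur.

```python
def find_cpg_sites(sequence: str) -> list[int]:
    """Return 0-based positions of the C in each CpG dinucleotide (5'->3')."""
    sequence = sequence.upper()
    return [i for i in range(len(sequence) - 1) if sequence[i:i + 2] == "CG"]

# Made-up example sequence: CpG sites at positions 2, 6, and 10.
print(find_cpg_sites("ATCGGGCGTACG"))   # [2, 6, 10]
```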

Studies Conducted on Epigenetic Drift

Longitudinal Studies

Few longitudinal studies have been conducted to measure epigenetic drift by assessing changes in DNA methylation. Longitudinal studies, in which the same subjects are followed throughout different stages of their lives, allow for an increased understanding of changes in gene expression over time, as the data come from the same person [16]. Martino et al. conducted one such study, measuring DNA methylation at CpG sites in buccal epithelial cells (constantly shed squamous epithelial cells lining the inside of the cheek) of MZ and DZ twins from birth to 18 months of age. The study found that while the DNA methylation of some twin pairs became more dissimilar over time, other pairs actually became more similar as they aged. This phenomenon of becoming more similar over time is known as epigenetic convergence, and it is thought to reflect regression to the mean, the statistical tendency of a group to display values closer to the average on re-measurement rather than more extreme ones [17]. Consistent with this, most of the twins who displayed epigenetic convergence had higher rates of dissimilarity at birth and were thus moving toward the average DNA methylation. Nevertheless, high rates of epigenetic change of all kinds were found after birth in this study, and it was concluded that such changes arose from stochastic and environmental factors, including the maternal environment in which the twins developed and nutritional exposures such as famine [18-20].
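Regression to the mean is a purely statistical effect, and a toy simulation makes the logic of epigenetic convergence easier to see. In the hypothetical sketch below, each twin pair has a stable underlying dissimilarity that is measured twice with noise; the pairs that look most dissimilar at birth score closer to the population average at follow-up even though nothing has truly changed. The numbers are invented and do not model the Martino et al. data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000

true_dissimilarity = rng.normal(0.5, 0.05, n_pairs)              # stable truth
at_birth = true_dissimilarity + rng.normal(0, 0.1, n_pairs)      # noisy measure 1
at_18_months = true_dissimilarity + rng.normal(0, 0.1, n_pairs)  # noisy measure 2

# Select the 10% of pairs that looked most dissimilar at birth.
extreme = at_birth > np.quantile(at_birth, 0.9)

print(round(at_birth[extreme].mean(), 2))      # ~0.70: well above the mean
print(round(at_18_months[extreme].mean(), 2))  # ~0.54: regressed toward 0.5
```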

Wong et al. conducted a similar study with human twins from ages 5 to 10 years and found that within-pair correlations in DNA methylation of the dopamine receptor 4 gene (DRD4) were similar in MZ and DZ twins. This finding suggests that these correlations in the DRD4 gene are driven by shared factors affecting twins who live in the same environment, and indicates that DRD4 methylation may not be heritable. They also found that correlations in DNA methylation of the serotonin transporter gene (SLC6A4/SERT) were low in both MZ and DZ twins, indicating that the methylation of SERT is attributable to the unique environmental experiences of each child and is likewise not heritable. These findings show that the environment has a visible impact on the methylation of the genome and that this impact begins early in life [21].

Cross-sectional Studies

Cross-sectional studies have supported the existence of epigenetic drift and have indicated that epigenetic differences increase with age [19]. Cross-sectional studies are those in which different subjects are studied at varying life stages. These studies are useful for understanding the relationship between exposures or diseases and the epigenome, but they cannot be used to track changes in the same subject over time, given that they take point measurements of different individuals [16]. Fraga et al. studied 80 Caucasian twins from Spain, measuring global acetylation of histones H3 and H4 and DNA methylation via total 5-methylcytosine content. In the case of acetylation in the MZ twin pairs, the youngest pairs were the most epigenetically similar, while the older pairs were much more distinct from one another. This implies that the MZ pairs grew more epigenetically distinct over their lifetimes, supporting the existence of epigenetic drift. As with the acetylation marks, Fraga et al. also found that the twin pairs with the most different methylation patterns were older, had spent more time apart, or had experienced different natural health difficulties; in other words, younger MZ twin pairs were more epigenetically similar in their methylation patterns. These findings strongly indicate a direct correlation between increasing age and the number of epigenetic differences among MZ twin pairs [22].

Conclusion

Epigenetic drift is the phenomenon in which the epigenome changes over time due to stochastic and environmental factors, and it is the reason that the epigenomes of older MZ twins differ more than those of younger MZ twins [3, 4]. MZ twin studies, both longitudinal and cross-sectional, allow for the examination of both time-related and exposure-related changes in the genome, and ultimately help create a better understanding of the dynamic nature of the epigenome.

The cause of such epigenetic change is also yet to be determined: changes in gene expression continue well after reproductive age, meaning that these later changes cannot be passed down to the next generation. Interestingly, the changes induced by the environment may also be nonadaptive, further obscuring the purpose of epigenetic drift [3]. Further longitudinal studies must be conducted in order to determine these effects and to ultimately help us understand the purpose of epigenetic drift.

 

References:

  1. “FastStats – Multiple Births.” (2022). Centers for Disease Control and Prevention. https://www.cdc.gov/nchs/fastats/multiple.htm. 
  2. Petronis A. (2006). Epigenetics and twins: Three variations on the theme. Trends in Genetics, 22(7), 347-350. https://doi.org/10.1016/j.tig.2006.04.010
  3. Martin G.M. (2005). Epigenetic drift in aging identical twins. Proceedings of the National Academy of Sciences, 102(30), 10413-10414. https://doi.org/10.1073/pnas.0504743102
  4. Jablonka E. and Lamb M.J. (2002), The Changing Concept of Epigenetics. Annals of the New York Academy of Sciences, 981: 82-96. https://doi.org/10.1111/j.1749-6632.2002.tb04913.x
  5. Bell J.T. and Spector T.D. (2011). A twin approach to unraveling epigenetics. Trends in Genetics, 27(3), 116-125. https://doi.org/10.1016/j.tig.2010.12.005
  6. Li Y. and Tollefsbol T.O. (2016). Age-related epigenetic drift and phenotypic plasticity loss: implications in prevention of age-related human diseases. Epigenomics, 8(12), 1637–1651. https://doi.org/10.2217/epi-2016-0078
  7. Tan Q., Heijmans B.T., Hjelmborg J.B., Soerensen M., Christensen K., Christiansen L. (2016). Epigenetic drift in the aging genome: a ten-year follow-up in an elderly twin cohort. International Journal of Epidemiology, 45(4), 1146–1158. https://doi.org/10.1093/ije/dyw132
  8. Turner B.M. (2000). Histone acetylation and an epigenetic code. BioEssays, 22(9), 836-845. https://doi.org/10.1002/1521-1878(200009)22:9
  9. Gallinari P., Di Marco S., Jones P., Pallaoro M. and Steinkühler, C. (2007). HDACs, histone deacetylation and gene transcription: from molecular biology to cancer therapeutics. Cell Research, 17, 195–211. https://doi.org/10.1038/sj.cr.7310149
  10. Kappeler L. and Meaney M.J. (2010). Epigenetics and parental effects. BioEssays, 32, 818-827. https://doi.org/10.1002/bies.201000015
  11. Meaney M.J. and Szyf M. (2005). Maternal care as a model for experience-dependent chromatin plasticity?. Trends in Neuroscience, 28(9), 456-463. https://doi.org/10.1016/j.tins.2005.07.006
  12. Sweatt J.D. (2009). Experience-Dependent Epigenetic Modifications in the Central Nervous System. Biological Psychiatry, 65(3), 191-197. https://doi.org/10.1016/j.biopsych.2008.09.002
  13. Haerter J.O., Lövkvist C., Dodd I.B., and Sneppen K. (2014). Collaboration between CpG sites is needed for stable somatic inheritance of DNA methylation states. Nucleic Acids Research, 42(4), 2235–2244. https://doi.org/10.1093/nar/gkt1235
  14. Unnikrishnan A., Freeman W.M., Jackson J., Wren J.D., Porter H. and Richardson A. (2019). The role of DNA methylation in epigenetics of aging. Pharmacology & Therapeutics, 195, 172-185. https://doi.org/10.1016/j.pharmthera.2018.11.001
  15. Moore L.D., Le T. and Fan G. (2013). DNA Methylation and Its Basic Function. Neuropsychopharmacology, 38, 23–38. https://doi.org/10.1038/npp.2012.112
  16. Ng J.W.Y, Barrett L.M., Wong A., Kuh D., Smith G.D., and Relton C.L. (2012). The role of longitudinal cohort studies in epigenetic epidemiology: challenges and opportunities. Genome Biology, 13(6), 246. https://doi.org/10.1186/gb-2012-13-6-246
  17. Barnett A.G., van der Pols J.C., Dobson A.J. (2005). Regression to the mean: what it is and how to deal with it. International Journal of Epidemiology, 34(1), 215–220. https://doi.org/10.1093/ije/dyh299
  18. Theda C., Hwang S.H., Czajko A., Loke Y.J., Leong P. and Craig J.M. (2018). Quantitation of the cellular content of saliva and buccal swab samples. Scientific Reports, 8, 6944. https://doi.org/10.1038/s41598-018-25311-0
  19. Martino D., Loke Y.J., Gordon L., Ollikainen M., Cruickshank M.N., Saffery R. and Craig J.M. (2013). Longitudinal, genome-scale analysis of DNA methylation in twins from birth to 18 months of age reveals rapid epigenetic change in early life and pair-specific effects of discordance. Genome Biology, 14, R42. https://doi.org/10.1186/gb-2013-14-5-r42
  20. Ollikainen M., Smith K.R., Joo E.J., Ng H.K., Andronikos R., Novakovic B., Aziz N.K.A., Carlin J.B., Morley R., Saffery R. and Craig J.M. (2010). DNA methylation analysis of multiple tissues from newborn twins reveals both genetic and intrauterine components to variation in the human neonatal epigenome. Human Molecular Genetics, 19(21), 4176–4188. https://doi.org/10.1093/hmg/ddq336
  21. Wong C.C.Y., Caspi A., Williams B., Craig I.W., Houts R., Ambler A., Moffitt T.E. and Mill J. (2010). A longitudinal study of epigenetic variation in twins. Epigenetics, 6, 516-526. https://doi.org/10.4161/epi.5.6.12226
  22. Fraga M.F., Ballestar E., Paz M.F., Ropero S., Setien F., Ballestar M.L., Heine-Suñer D., Cigudosa J.C., Urioste M., Benitez J., Boix-Chornet M., Sanchez-Aguilera A., Ling C., Carlsson E., Poulsen P., Vaag A., Stephan Z., Spector T.D., Wu Y., Plass C., and Esteller M. (2005). Epigenetic differences arise during the lifetime of monozygotic twins. Proceedings of the National Academy of Sciences, 102(30), 10604-10609. https://doi.org/10.1073/pnas.0500398102

Safety and Efficacy of CAR T-Cell Therapy for Refractory or Relapsed B-Cell Acute Lymphoblastic Leukemia

By Palak Arora

Author’s Note: I wrote this review article as an assignment for the course UWP 102B. We were instructed to choose any topic from the field of biology, which presented me with a wide range of possibilities. I was not sure where to begin my search, but one day while scrolling through social media, I heard about CAR T-cell therapy as a potential cure for cancer and was intrigued. The scientific community has been searching for such a treatment for a very long time, and this new approach is a huge breakthrough. This article will provide some background and explore the research that has been done on CAR T-cell therapy. I want to bring awareness to my readers about this topic and also inspire young scientists to pursue research in this field, since many questions remain unanswered.

 

Abstract 

The purpose of this review is to present readers with an overview of the advancements in CAR T-cell therapy and the areas in which more research is needed. CAR T-cell therapy is a modern approach to treating acute lymphoblastic leukemia: modified T cells target a specific antigen on malignancies and help eliminate them. CD19 and CD22 are the antigens currently under investigation, and the goal is to increase remission rates with the fewest adverse events during recovery while preventing relapse as much as possible. Bispecific targeting of antigens and the subsequent use of allogeneic hematopoietic stem cell transplantation post-treatment are being examined as potential solutions to these challenges; however, further research is required to confirm these hypotheses and discover new approaches.

Introduction 

Acute lymphoblastic leukemia (ALL) is the most common type of cancer in pediatric patients. This disease affects the blood and the bone marrow: immature white blood cells are rapidly produced by the bone marrow due to a genetic mutation that tells them to keep dividing. Lymphocytes are a type of white blood cell, and there are two main types: B cells and T cells. ALL can affect both cell types but primarily affects B cells. In normal cases, mature B lymphocytes developing in the bone marrow function in the immune system by producing antibodies. Each B lymphocyte makes one kind of antibody that is highly specific for one unique antigen. These populations of B cells are usually inactive, but as soon as one encounters a “non-self,” or foreign, antigen, the B cell is activated and begins to rapidly expand, producing multiple copies of itself. Activated B cells display the antibody on their surface, and upon binding the antigen again they also stimulate helper T cells, which mount a defense against the “non-self” protein. Leukemic (cancerous) B cells, by contrast, cannot function properly due to an incomplete maturation process; they are not as effective at fighting infections, and as their numbers grow, they take up space that healthy red and white blood cells could otherwise occupy [1]. B-cell ALL is generally treated with chemotherapy or targeted immune cell therapy. After initial treatments, approximately 20% of pediatric and young adult patients relapse, suggesting that chemotherapy alone is not enough to treat them [2]. Immunotherapy approaches that redirect T-cells to malignancies have been used in conjunction with chemotherapy and have proven effective in achieving complete remission (CR) [2]. These approaches include the use of tisagenlecleucel (CAR T-cell therapy) and blinatumomab, a bispecific T-cell engager (BiTE), which is a protein that simultaneously binds CD3 on T-cells and an antigen on the malignant cell in order to redirect T-cells toward the malignancy.

This review will focus on research from the years 2018-2022 in order to inform readers about current research on CAR T-cell therapy for refractory or relapsed B-cell ALL in pediatric and adult patients. CD19-targeted CAR T-cell therapy has been successful in achieving high remission rates in pediatric patients, but treating adult patients with B-cell ALL has been a significant challenge because antigen loss post-infusion leads to a higher proportion of relapses and adverse events. Identifying and mitigating the risk factors for potential relapse is currently an active area of research.

Figure 1. Stem cell differentiation. In ALL, a genetic mutation causes the bone marrow to rapidly produce immature white blood cells. The disease can affect both main types of lymphocyte, B cells and T cells, but primarily affects B cells.

What is CAR T- Cell Therapy? 

Chimeric antigen receptor (CAR) T-cell therapy is a form of immunotherapy in which a patient’s white blood cells are modified by the addition of an artificial receptor, the CAR, allowing them to recognize specific antigens on cancer cells [3]. Through the process of leukapheresis, white blood cells are extracted and separated. The modified cells generally target the CD19 antigen, the approach used by the FDA-approved medication tisagenlecleucel. After modification, the cells are returned to the patient’s bloodstream, and progress is measured through complete remission rates and through biomarkers like minimal residual disease, a term for the small number of cancer cells left in the body post-treatment.

Tisagenlecleucel vs Blinatumomab 

Blinatumomab is another FDA-approved medication used to treat B-cell ALL. The major difference between the two approaches is that CAR T-cell therapy uses the 4-1BB co-stimulatory domain, which enhances CAR T-cell proliferation, while blinatumomab does not. Verneris et al. (2021) conducted a study to indirectly compare the two medications, drawing on patient data from two major single-arm trials, ELIANA and MT103-205, to determine which immunotherapy approach is safer and more effective in treating acute lymphoblastic leukemia in pediatric and young adult patients. Because the original trials were single-arm, no direct comparison could be made between the two treatments; instead, Verneris et al. (2021) controlled for patient variables and used statistical analysis to make an indirect comparison. They observed higher CR rates with tisagenlecleucel (82%) than with blinatumomab (39%), as well as consistently higher overall survival (OS) rates in patients treated with tisagenlecleucel. A potential third-variable problem that could also explain these results is that patients in the ELIANA trial were heavily pre-treated before tisagenlecleucel infusion while those in the MT103-205 trial were not. Moreover, the ELIANA trial included pediatric and young adult patients while MT103-205 included only pediatric patients, creating a significant difference in median ages that may have affected the results. The sample size in this study was large enough to be generalizable, and the results proved statistically significant, with a p-value of <0.0001 for CR rates and <0.001 for OS rates. This study is relevant to newly enrolled patients with B-cell ALL who are considering their treatment options and might benefit from a comparison of the two treatments, even an indirect one [2]. While this indirect comparison provides a good starting point, further double-arm studies are needed to confirm these results.
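For readers unfamiliar with how a difference like 82% versus 39% is judged statistically significant, the sketch below runs a textbook two-proportion z-test in Python. The sample sizes are hypothetical placeholders, and this simple test is not the adjusted indirect-comparison method Verneris et al. actually used; it only illustrates the kind of calculation behind such p-values.

```python
from math import sqrt, erf

# Hypothetical responders / total patients per arm (NOT the trial data).
x1, n1 = 65, 79    # tisagenlecleucel arm, CR rate ~82%
x2, n2 = 27, 70    # blinatumomab arm, CR rate ~39%

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.2e}")            # a large z gives a tiny p
```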

Current challenges in CAR T- cell therapy 

Dosage and Side effects 

For CAR T-cell therapy to be effective, a minimum number of cells (on the order of 10⁸ CAR T-cells) must be infused. A higher number of cells increases the chances of achieving remission, but it also increases the side effects experienced by patients. One of the most common side effects, experienced by 80% of patients, is cytokine release syndrome (CRS), a life-threatening condition characterized by the dysfunction of multiple organs. Other side effects include febrile neutropenia, characterized by a high risk of infection (experienced by 40% of subjects); hematopoietic cytopenia unresolved by day 28 (a reduction in mature blood cells); transient neuropsychiatric events; and tumor lysis syndrome (a large number of tumor cells simultaneously releasing their contents into the bloodstream) [4]. Balancing the dosage of CAR T-cells without knowing which patients are at higher risk of developing these side effects remains a significant challenge and is a potential area for research.

Antigen loss 

CAR T-cell therapy functions by targeting antigens on malignant cells using artificial receptors. Researchers have observed, however, that after the initial infusion, more than 60% of these patients relapse due to CD19 antigen loss [5]. Without the antigen on the leukemic B-cells, the modified T cells cannot bind to and eliminate them. This also means that a second infusion of CAR T-cells would not help, since CAR T-cell persistence is not the problem. It is not yet known whether antigen loss can be prevented, but researchers are still looking into biomarkers for relapse.

Predicting and preventing relapse

Minimal residual disease (MRD) 

Observing minimal residual disease is the most common way to determine the success of CAR T-cell therapy in a clinical setting. There are two ways to measure it: next-generation sequencing (NGS) and flow cytometry. The NGS assay sequences and tracks rearranged tumor-specific immunoglobulin sequences, while flow cytometry uses blood and bone marrow samples to measure the physical and chemical characteristics of individual cells and thereby estimate the number of malignant cells. MRD positivity has been hypothesized to be a predictor of relapse, and Pulsipher et al. (2022) present statistically significant results affirming this hypothesis. The researchers also compared the two methods and found that the NGS assay was much more sensitive than flow cytometry in detecting MRD positivity. After examining multiple potential biomarkers and demographic characteristics, they found no significant effect of age, cytogenetic or genetic risk, sex, or prior therapy on relapse within a year after tisagenlecleucel infusion. They did, however, observe that persistence of B-cell aplasia (the depletion of normal B-cells by CAR T-cells) and detection of MRD by NGS were high-risk factors for relapse [6]. Current research has been focusing on methods like allogeneic hematopoietic stem cell transplantation (allo-HSCT) to reduce MRD positivity and thereby prevent early relapse. Researchers are also looking into other antigens that might help reduce relapse rates.

Targeted Antigens 

CAR T-cell therapy is designed to target specific antigens on malignant cells. The most common target is the CD19 antigen, whose expression is maintained in most B-cell malignancies. CD19 CAR T-cell therapy has generally proven successful, with a 70-90% complete remission rate for patients with relapsed or refractory B-cell ALL, and yet a large proportion of patients have relapsed within a year [5]. The study by Maude et al. (2018) found an overall remission rate of 82% in their sample of 50 patients using CD19 CAR T-cell therapy. The researchers used the polymerase chain reaction (PCR) to detect tisagenlecleucel in the peripheral blood and found no relationship between dosage and tisagenlecleucel expansion. In their analysis of the safety of tisagenlecleucel, they found that at least one adverse event occurred in every patient. These adverse events included, but were not limited to, cytokine release syndrome, pyrexia, decreased appetite, febrile neutropenia, and headache. Most importantly, 19 patients died after tisagenlecleucel infusion. The study concludes that tisagenlecleucel produces high remission rates in high-risk pediatric and young adult patients with relapsed or refractory B-cell ALL, although there are significant risks associated with this approach [7].

Therefore an alternative antigen, CD22, is also under investigation as a potential solution. Shah et al. (2020) investigated the lack of alternative immunotherapy options for people with B-cell ALL who have relapsed after CD19 CAR T-cell therapy. They found that 86.2% of the participants developed cytokine release syndrome (CRS), that hemophagocytic lymphohistiocytosis (HLH)-like toxicities developed only in patients who had CRS, and that peak CAR expansion occurred 14-21 days after infusion. The researchers observed that 70.2% of the participants achieved complete remission, and 87.5% of these were found to be negative for minimal residual disease (MRD) by flow cytometry; however, 75% of the participants eventually relapsed. They conclude that CD22 CAR T-cell therapy is a highly effective treatment option for those who have relapsed after CD19 CAR T-cell therapy or were resistant to it [8]. Moreover, the study by Pan et al. (2019) has also shown promising results, with a 70.5% complete remission rate among patients with refractory or relapsed B-cell ALL [5]. Targeting these antigens individually has achieved high CR rates, and yet relapse continues to be an issue.

Another alternative is to target the CD19 and CD22 antigens simultaneously. In this approach, researchers edit the patient’s T cells so that they carry receptors for both the CD19 and CD22 antigens. This technique is used to overcome antigen loss: if one of the antigens is lost, the other is still available for T-cells to target, improving response rates to therapy. After reviewing the results of their experiments, Dai et al. (2020) conclude that these bispecific CD19/CD22 CAR T-cells might provide a good alternative for adult patients with B-cell ALL who are ineligible for other treatments, as this immunotherapy option is able to prevent antigen escape without an increased risk of toxicity [9].

Figure 2. CAR T-cells are programmed to attack specific antigens like CD19 or CD22 on malignant cells.

Allogeneic hematopoietic stem cell transplantation (allo-HSCT) 

Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is a procedure in which healthy stem cells (blood-forming cells) are transferred from a donor to a patient to replace the patient’s own stem cells. This procedure can be used after CAR T-cell therapy as a way to increase MRD negativity, boosting the number of healthy blood cells and improving patient recovery. In their study, Jiang et al. (2019) note that despite the promising results of CAR T-cell therapy, patients are at a high risk of relapse due to antigen escape from tumor cells and reduced CAR T-cell persistence. They therefore investigated whether subsequent allo-HSCT could increase minimal residual disease (MRD) negativity. This was a quantitative study that tracked the number of patients with adverse effects post-treatment, the one-month complete remission (CR) rate, overall survival (OS), event-free survival (EFS), relapse-free survival (RFS), and in vivo persistence of CAR T-cells. The researchers used real-time quantitative polymerase chain reaction (qPCR) to measure levels of the CAR transgene and flow cytometry to calculate the percentage of CAR T-cells in the peripheral blood and bone marrow post-infusion. Jiang et al. (2019) observed significant differences in EFS and RFS rates between patients who received subsequent allo-HSCT and those who did not; however, no significant differences were observed in the overall survival rate [10].

Several studies have attempted to investigate the effects of allo-HSCT in combination with CAR T-cell therapy, but so far the results have been limited and on multiple occasions contradictory. Since this is an urgent problem, Jiang et al. (2019) designed their study in an attempt to replicate and confirm the benefits of allo-HSCT. While their study observed an improvement in EFS in the experimental group, a previous study had observed no such difference. Jiang et al. (2019) caution that while their study is indicative of trends, the short-term follow-up and lack of randomization mean that further, larger-scale studies are needed to obtain more reliable and valid results [10].

Conclusion 

In conclusion, CAR T-cell therapy is an effective way to treat B-cell ALL, achieving complete remission in up to 90% of patients in clinical trials. Achieving complete remission is relatively easier in pediatric patients, but both pediatric and adult patients remain very likely to relapse within the same year due to antigen loss, among other reasons. The field needs substantial further research to improve patient outcomes, achieve higher remission rates, and prevent relapse as much as possible. While researchers have identified high-risk factors for relapse, such as B-cell aplasia and minimal residual disease positivity, it is not yet known how these can be effectively reduced. Allogeneic hematopoietic stem cell transplantation may be one way to prevent relapse, but so far the results are limited and often contradictory. Clinical trials with larger sample sizes are needed to provide accurate and reliable results. Overall, CAR T-cell therapy has proven to be one of the greatest advancements in cancer treatment of the past ten years, but the treatment plan needs considerable refinement before it can be used to its full potential.

 

References:

  1. National Cancer Institute. 2021 Nov 19. Adult Acute Lymphoblastic Leukemia Treatment (PDQ®)–Patient Version. Accessed 2022 Feb 11. Available from: https://www.cancer.gov/types/leukemia/patient/adult-all-treatment-pdq
  2. Verneris MR, Ma Q, Zhang J, et al. 2021. Indirect comparison of tisagenlecleucel and blinatumomab in pediatric relapsed/refractory acute lymphoblastic leukemia. Blood Advances. 5(23):5387-5395. doi:10.1182/bloodadvances.2020004045
  3. American Cancer Society. CAR T-cell Therapy and Its Side Effects. Accessed 2022 Feb 11. Available from: https://www.cancer.org/treatment/treatments-and-side-effects/treatment-types/immunotherapy/car-t-cell1.html
  4. Thomas X, Paubelle E. 2018. Tisagenlecleucel-T for the treatment of acute lymphocytic leukemia. Expert Opinion on Biological Therapy. 18(11):1095-1106. doi:10.1080/14712598.2018.1533951
  5. Pan J, Niu Q, Deng B, et al. 2019. CD22 CAR T-cell therapy in refractory or relapsed B acute lymphoblastic leukemia. Leukemia. 33(12):2854-2866. doi:10.1038/s41375-019-0488-7
  6. Pulsipher MA, Han X, Maude SL, et al. 2022. Next-Generation Sequencing of Minimal Residual Disease for Predicting Relapse after Tisagenlecleucel in Children and Young Adults with Acute Lymphoblastic Leukemia. Blood Cancer Discovery. 3(1):66-81. doi:10.1158/2643-3230.BCD-21-0095
  7. Maude SL, Laetsch TW, Buechner J, et al. 2018. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. New England Journal of Medicine. 378:439-448.
  8. Shah NN, Highfill SL, Shalabi H, et al. 2020. CD4/CD8 T-Cell Selection Affects Chimeric Antigen Receptor (CAR) T-Cell Potency and Toxicity: Updated Results From a Phase I Anti-CD22 CAR T-Cell Trial. JCO. 38(17):1938-1950. doi:10.1200/JCO.19.03279
  9. Dai H, Wu Z, Jia H, et al. 2020. Bispecific CAR-T cells targeting both CD19 and CD22 for therapy of adults with relapsed or refractory B cell acute lymphoblastic leukemia. Journal of Hematology & Oncology. 13(1):30. doi:10.1186/s13045-020-00856-8
  10. Jiang H, Li C, Yin P, et al. 2019. Anti-CD19 chimeric antigen receptor-modified T-cell therapy bridging to allogeneic hematopoietic stem cell transplantation for relapsed/refractory B-cell acute lymphoblastic leukemia: An open-label pragmatic clinical trial. American Journal of Hematology. 94(10):1113-1122. doi:10.1002/ajh.25582

Identifying R loops with DNA/RNA ImmunoPrecipitation sequencing technology

By Aditi Goyal, Genetics & Genomics, Statistics ‘22

 

Abstract:
Non-B structures are nucleic acid structures that do not follow the classic B-form double helix described by James Watson, Francis Crick, and Rosalind Franklin [1]. R loops are a class of non-B structures and are estimated to occur across 5 percent of the human genome [1]. R loops occur when RNA strands bind to DNA, creating a DNA/RNA hybrid [1]. These structures have been implicated in several biological mechanisms, including gene regulation and DNA replication [1]. In order to further understand the purpose of R loops and their impact, one must first understand where they occur [2]. To study the location of these structures, scientists employ a technique called DRIP sequencing (DNA/RNA ImmunoPrecipitation sequencing) [1]. This technique couples standard immunoprecipitation with the high-throughput sequencing protocol commonly used in ChIP seq studies [1]. As with any sequencing technique, several modifications have been made, resulting in various types of DRIP seq protocols [1]. This literature review aims to summarize some of the more common techniques employed in R loop research [2]. It relies on a compilation of primary research papers that document the development of these techniques [3]. It discusses the variations of each technique and identifies situations where one method may be preferred over another [4]. Further, it provides insight into the drawbacks of each method and identifies areas of improvement for these types of sequencing studies [4]. Finally, this review also highlights further areas of research inspired by the data generated from DRIP seq experiments [5].

 

Introduction

In all organisms, maintaining genome stability is critical for biological function. Structures that threaten the overall stability and structure of an organism’s genome are therefore of high importance to the scientific community, as they may provide insight into several biological mechanisms. R loops are no exception. R loops are DNA/RNA hybrids formed when an RNA strand hybridizes onto a double-helix DNA molecule [1]. The hybrid displaces one of the two DNA strands, creating the ‘loop’ structure. This structure does not follow the classic B-form double helix described by Watson, Crick, and Franklin, and is therefore known as a type of non-B DNA structure. Non-B DNA structures are fairly common, occurring in approximately 13 percent of the human genome [2], and R loops alone occur across 5 percent of the genome. These structures disrupt DNA regulation and maintenance and are therefore a critical topic of study for understanding gene regulation [2, 5, 6].

Figure 1: The three-stranded structure of an R loop. The RNA strand, illustrated in blue, displaces the purple DNA strand, creating a loop structure. The S9.6 antibody, illustrated in green, recognizes DNA/RNA hybrids.

Understanding the genomic context of a regulatory element can provide insight into its function. Therefore, a key question when studying R loops is where these loops form along the genome, and several studies have aimed to characterize this [3-5]. In general, R loop formation is not sequence-dependent: R loops tend to occupy positions defined relative to the gene body rather than by a specific sequence pattern. These structures seem to occur before the first intronic region of a gene, and studies also show that they target promoter regions within a gene. These conclusions further support the idea that R loops interact more heavily with the structure of DNA than with its sequence. Specifically, R loop formation may depend on the accessibility of the DNA strand itself, as researchers have shown that R loops tend to form in unmethylated CpG islands (regions of the genome, primarily near promoters, that contain a large number of “CG” dinucleotide repeats) [4]. The Chedin lab at UC Davis has proposed that R loops prevent the methylation of transcription start sites, thereby promoting the transcription of certain genes [4]. This discovery further supports the theory that R loops are regulatory elements and play a part in gene expression regulation. Of course, given the structural instability R loops cause, they are hypothesized to have both positive and negative effects on overall gene regulation and maintenance.

To understand where R loops form along a genome, we need a technology that captures this hybridization and allows us to map these regions back to a reference genome. The most widely used methodology for this purpose is DNA/RNA immunoprecipitation sequencing, or DRIP seq for short. This review aims to provide an overview of the technology and some of its commonly used modifications, and to highlight its potential benefits and drawbacks. Finally, this paper proposes further areas of research: DRIP seq is a critical tool for studying R loop biology and warrants the development of analytical tools designed for processing DRIP seq-specific data.

 

DRIP seq Protocol

Like ChIP seq protocols, DRIP seq utilizes an antibody to precipitate fragments of interest, here RNA sequences bound to DNA. Specifically, most DRIP seq protocols rely on the S9.6 antibody, due to its high specificity and affinity for DNA/RNA hybrids [7]. As a control, genomic samples are treated with RNase H before immunoprecipitation [8]. RNase H, short for ribonuclease H, is an enzyme active in DNA replication: it recognizes DNA/RNA hybrids, which occur naturally at the RNA primers of Okazaki fragments, and degrades the RNA. Treating with RNase H therefore degrades the R loops present in the sample, leaving only genomic DNA behind. Researchers have shown that RNase H treatment can effectively remove R loops that disrupt DNA replication mechanisms [8].

DRIP seq is coupled with high-throughput sequencing and used in conjunction with a peak calling algorithm. Peak calling algorithms identify regions of interest in the genome: sequence reads are aligned to a reference genome and then processed by one of many peak callers, the most common being MACS (Model-based Analysis of ChIP-Seq data). MACS analyzes the aligned data and identifies “peaks,” or areas where there is a significant pileup of sequenced reads. These peaks indicate areas of interest and tell the researcher where their target region is. At its core, DRIP seq performs the essential task of informing researchers where R loops occur. However, given the intricacies of this research, there are several drawbacks and assumptions involved in using DRIP seq.
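The core statistical idea behind peak callers like MACS can be sketched in a few lines: compute per-position read depth and flag stretches whose depth would be very unlikely under a Poisson background model. The toy Python below is only a conceptual illustration with invented coverage values; real MACS additionally models local background, filters duplicate reads, and controls the false discovery rate.

```python
from math import exp

def poisson_sf(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam), via the cumulative pmf."""
    pmf, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += pmf
        pmf *= lam / (i + 1)
    return max(0.0, 1.0 - cdf)

def call_peaks(pileup, background_rate, alpha=1e-3):
    """Return (start, end) intervals where coverage is significantly enriched."""
    peaks, start = [], None
    for pos, depth in enumerate(list(pileup) + [0]):   # sentinel closes open peak
        enriched = poisson_sf(depth, background_rate) < alpha
        if enriched and start is None:
            start = pos
        elif not enriched and start is not None:
            peaks.append((start, pos))
            start = None
    return peaks

coverage = [2, 1, 3, 2, 15, 18, 20, 17, 2, 1]     # hypothetical per-base depth
print(call_peaks(coverage, background_rate=2.0))  # [(4, 8)]
```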

bisDRIP seq

One major drawback of DRIP seq is its lack of resolution. The S9.6 antibody capture technique used in DRIP seq successfully identifies DNA/RNA hybrids, but the captured regions are often too broad, as DRIP seq cannot distinguish which regions of DNA directly bind to RNA and which are flanking regions [9]. This resolution is important for understanding how R loops impact promoter regions, which are sequence-specific entities [9, 10]. Additionally, defining the boundaries of an R loop can help us understand which elements of a gene R loops directly interact with. bisDRIP seq (bisulfite DRIP sequencing) was developed as a method to study where R loops localize within promoter regions [9]. Bisulfite treatment is a commonly used mutagenesis technique: the chemical converts unprotected cytosines into uracil nucleotides. In this application, researchers target the “open” cytosines present on the displaced DNA strand, which mutate into uracils. In contrast, the cytosines on the DNA strand within the DNA/RNA hybrid are protected from the bisulfite because they are base-paired, so they remain unchanged. Based on the region of DNA mutated on the single strand, we can define the boundaries of the DNA/RNA hybrid. The developers of this method, a team at Cornell, discovered that R loops generally have boundaries defined by the transcription start site and the first exon-intron junction [9]. This implies that R loops are variable in length, depending on the length of the first exon.
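The boundary-mapping logic of bisDRIP seq lends itself to a simple illustration. In the hypothetical sketch below, a bisulfite-converted read is compared to the reference; each reference cytosine that reads out as thymine (uracil after amplification) marks a base that was single-stranded, so the span of converted positions approximates the displaced-strand footprint. The sequences are invented for illustration.

```python
reference = "ACGTCCGATCGGACCT"
converted = "ACGTTTGATTGGACCT"   # some reference C's now read as T

# Reference cytosines that were converted must have been unprotected
# (single-stranded) when the bisulfite was applied.
converted_sites = [
    i for i, (ref, obs) in enumerate(zip(reference, converted))
    if ref == "C" and obs == "T"
]

if converted_sites:
    # Crude boundary estimate: the span of the converted cytosines.
    boundary = (min(converted_sites), max(converted_sites) + 1)
    print(boundary)   # (4, 10): inferred displaced-strand region
```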

While this technique offers high resolution, it also relies on the presence of cytosines in the region. R loops have been shown to localize in regions of high GC content [4], but where this is not the case, high resolution may not be attainable simply due to a lack of cytosines. Another possibility is that even in a GC-rich region, the displaced strand may be G-rich rather than C-rich; if there are no or few open cytosines on the displaced strand, the technique will not work. Further, chemically altering the structure of DNA can introduce considerable instability, which can make this technique difficult to implement.

Figure 2: Bisulfite treatment converts open cytosines to uracil, allowing us to track which regions were affected by the treatment.

S1-DRIP seq

S1-DRIP seq introduces modifications to the DRIP seq protocol that dramatically improve yield and minimize background noise. The DRIP seq protocol typically uses sonication to shear DNA into fragments before immunoprecipitation. However, this method is grossly ineffective at capturing R loops, as the force of sonication disrupts most R loops present [11]: sonication shears the DNA/RNA bond, allowing the RNA strand to be displaced and the DNA strand to re-anneal to its sister strand. S1 nuclease is an enzyme that targets single-stranded nucleic acids, that is, the displaced DNA strand. By digesting this single strand, researchers can select fragments based on size. Moreover, digestion of the single strand increases the stability of the DNA/RNA hybrid, allowing more of these regions to survive immunoprecipitation. By preserving these R loops, researchers were able to identify approximately 800 novel R loop sites in Saccharomyces cerevisiae, a common model organism for studying R loop biology [11, 12]. Due to its targeted nature, this method also greatly reduces unwanted noise, further improving the resolution of peak calling methods [11].

 

DRIP seq Analysis

Much like in ChIP seq, the next step after DRIP sequencing and alignment is to run a peak calling program. These programs identify regions with a statistically significant number of reads aligning to them; this metric is referred to as the “pileup.” A significant pileup indicates that an R loop is present in the region. MACS2 (Model-based Analysis of ChIP-Seq) has become an industry standard for analyzing peak data, and given that the protocol for DRIP seq closely resembles ChIP seq, the same program has been used to analyze DRIP seq experiments.

Once peaks have been called, they need to be annotated. Several peak annotators exist, designed for “universal” data; they use different features of the genetic data to create functional annotations. UROPA (Universal RObust Peak Annotator) allows users to target any type of genomic feature, along with strand specificity and anchor positions relative to the feature [13]. Similarly, PAVIS and HOMER are common peak annotation tools, but none of these were specifically designed for DRIP seq data [14, 15].

To address the need for a DRIP seq-specific annotation platform, a team at the University of Bologna developed DROPA (DRIP Optimized Peak Annotator) [16]. There are several differences between DROPA and the three other peak annotators listed above, the primary one being that DROPA allows for multiple gene annotations. Recall that R loops can be very long and can span several gene features; DROPA takes this into account and allows for longer annotations than most annotators that use gene features as anchor points [16]. While DROPA does not provide antisense peak annotation, it drastically reduces the number of false-positive annotations, to under 7 percent [16].
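At its simplest, peak annotation is an interval-overlap problem, and DROPA’s distinguishing behavior (reporting every overlapping gene rather than a single best hit) is easy to sketch. The Python below is a bare-bones illustration with hypothetical coordinates, not DROPA’s actual algorithm.

```python
# Hypothetical gene intervals and called peaks: (name, start, end) / (start, end).
genes = [("GENE_A", 100, 500), ("GENE_B", 450, 900), ("GENE_C", 2000, 2500)]
peaks = [(120, 480), (1500, 1600)]

def annotate(peak, gene_table):
    """Return ALL genes overlapping the peak, like a multi-gene annotator."""
    p_start, p_end = peak
    return [name for name, g_start, g_end in gene_table
            if p_start < g_end and g_start < p_end]   # half-open overlap test

for peak in peaks:
    print(peak, annotate(peak, genes) or ["intergenic"])
# (120, 480)   -> ['GENE_A', 'GENE_B']  (a long peak spanning two genes)
# (1500, 1600) -> ['intergenic']
```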

 

Drawbacks and Discussions

DRIP seq is a critical part of studying R loops, but the process is not perfect, and there are major drawbacks to using it for R loop identification. Firstly, the S9.6 antibody has been shown to bind RNA/RNA hybrids in addition to DNA/RNA hybrids [17]. Additionally, even when S9.6 does identify DNA/RNA hybrids, it has been proposed that there is an inherent bias in which hybrids it recognizes: research points to a potential nucleotide composition bias within the antibody, with polyA or polyU a commonly identified pattern [18]. Given that R loops are GC-rich, an antibody biased toward AU binding suggests that this method may produce both false positives and false negatives.

Further, the S9.6 antibody requires only six nucleotides of DNA/RNA binding to identify a “hybrid” [19]. This has both positive and negative implications. Requiring only six nucleotides allows the antibody to capture the smallest of R loops, which is important for studying smaller promoter regions. However, it also means that non-R loop structures may be misidentified as R loops. This permissive binding, combined with the antibody’s ability to recognize RNA/RNA hybrids, implies that the method may capture small interfering RNA complexes along with R loops.

Another major issue with DRIP seq is the quantitative analysis. Firstly, there is no peak caller designed specifically for DRIP seq data. As of now, researchers can use MACS2, which was designed for ChIP seq data, or build a makeshift peak caller. This lack of a standardized method causes large variance in how data are analyzed across experiments and likely leads to varying results. Additionally, peaks identified with MACS2 may not accurately represent in vivo conditions: while ChIP seq and DRIP seq follow very similar protocols, we cannot assume the data look the same.

Furthering this point, the analysis of the peaks themselves is somewhat subjective. There is no set standard for what counts as a “peak” when analyzing DRIP seq alignment data, so different parameterizations and peak calling methodologies can produce drastically different R loop maps. This lack of standardization is rampant in current research. To combat the problem, a team at Nanjing University has compiled a database of R loop experiments, R-loopBase, which features over 11 different technologies and billions of gene annotations [20]. This database is a fundamental first step toward standardization, yet the sheer number of variants it catalogs highlights the necessity of a standardized protocol.

This issue extends beyond the analytical component to the preparation of DRIP seq samples as well. As discussed earlier, sonication is a common method of shearing double-stranded DNA during sample preparation, but if an endonuclease is used instead, it can drastically alter the results. The Halász team in Hungary investigated several variables in the DRIP seq lab protocol and concluded that restriction enzyme digestion overrepresents longer R loops compared to those in open reading frames [21]. They propose a standardized preparation method to help normalize physical variation between datasets [21].

 

Conclusion

R loops remain an elusive subject in molecular biology, often characterized as the double-edged sword of gene regulation. They have been identified as critical components of transcription termination, with evidence pointing to catastrophic results if R loops are removed. And yet they are undoubtedly a key player in genomic instability: they have been linked to Fragile X syndrome, a genetic condition that can cause intellectual disabilities and cognitive impairment [22], and there is also evidence to suggest that R loop formation is a contributing factor in Huntington’s disease; breast, ovarian, and colon cancer; and Prader-Willi syndrome [23]. Tools like DRIP seq allow us to understand how these elements interact with DNA on a genome-wide scale and provide critical insight into the types of interactions occurring. Given the inherent noise of in vivo cell systems, standardization across DRIP seq methodologies is critical for reducing noise and improving statistical significance in peak calling algorithms. If more reliable data become available, there is huge potential for applications of artificial intelligence in this field: R loop prediction could save researchers countless hours and resources by allowing them to forgo DRIP seq experiments and rely on a predictive model to indicate whether an R loop is expected at a locus of interest. Such a pattern detection program could also help elucidate why R loops tend to form in certain hotspots over others. However, to make these discoveries, we must first develop tools and standards across the entire DRIP seq protocol, both in the lab and in analysis. R loop biology has boomed over the last decade and will only continue to grow; the field demands that we invest resources in developing tools specific to studying R loops and other non-B DNA structures.

 

References:

  1. Santos-Pereira & Aguilera. Nat Rev Genet 16, 583–597 (2015).
  2. Wilfried et al. Nucleic Acids Research, Volume 49, Issue 3, 2021.
  3. Malig et al. J Mol Biol. 2020.
  4. Ginno et al. Mol. Cell, 45 (2012), pp. 814-825.
  5. Skourti-Stathaki & Proudfoot. Genes Dev (2014), pp. 1384-1396.
  6. Richard & Manley. J Mol Biol. 2017;429(21):3168-3180.
  7. Boguslawski et al. (1 May 1986). Journal of Immunological Methods. 89(1): 123–30.
  8. Zhao et al. (2018). EMBO reports, 19(5), e45335.
  9. Dumelie & Jaffrey (2017, October 26).
  10. Chédin. (2016). TIG, 32(12), 828–838.
  11. Wahba et al. Genes & Development, 30(11), 1327–1338.
  12. Chan et al. (2014). PLoS Genetics, 10(4), e1004288.
  13. Kondili et al. Sci Rep 7, 2593 (2017).
  14. Huang et al. (2013). 29(23), 3097–3099.
  15. Homer. Homer Software and Data Download.
  16. Russo et al. BMC Bioinformatics 20, 414 (2019).
  17. Hartono et al. J Mol Biol. 2018 Feb 2;430(3):272-284.
  18. Konig et al. PLoS ONE 2017, 12, e0178875.
  19. Phillips et al. 2013, 26, 376–381.
  20. Ruoyao et al. Nucleic Acids Research, Volume 50, Issue D1, 7 January 2022, Pages D303–D315.
  21. Halász et al. Genome Research vol. 27(6) (2017): 1063-1073.
  22. Groh et al. PLoS Genet. 2014 May 1;10(5):e1004318.
  23. Richard & Manley. Journal of Molecular Biology vol. 429(21) (2017): 3168-3180.

Mitochondrial Dysfunction and Alzheimer’s Disease

By Nathifa Nasim, Neurobiology, Physiology, and Behavior ‘22

Author’s note: My interest in Alzheimer’s pathology led me to the molecular mechanisms that drive neurodegeneration. After working on a project on mitochondrial blockers and Alzheimer’s disease in the Jin lab at the MIND Institute, I found numerous intersections between neurodegeneration and mitochondrial dysfunction, which I seek to explore in this review.

 

Introduction

Mitochondria are critical for energy production throughout the body, and they are especially crucial in the brain. Not only does the brain require significantly more energy relative to its mass than other organs, but it also has limited glycolytic capacity (the maximum rate of glycolytic ATP production), relying mostly on oxidative phosphorylation to meet its high energy demands [1]. Consequently, mitochondrial complications that affect the brain’s capacity for oxidative phosphorylation can have severe consequences for overall cognitive function. Mitochondrial dysfunction has been implicated in the pathology of various neurodegenerative diseases such as Parkinson’s disease, Huntington’s disease, Leber’s hereditary optic neuropathy, and, the focus of this review, Alzheimer’s disease (AD) [2]. Although the exact mechanisms behind the progression of AD are still unclear, recent research points toward ways in which abnormalities in oxidative phosphorylation, or more specifically in the mitochondrial electron transport chain (ETC) – the series of protein complexes where oxidative phosphorylation takes place – produce cellular damage that aligns with hallmarks of AD pathology such as atrophy, Aβ aggregation, and cognitive decline.

Electron Transport Chain Deficiency

Impaired energy metabolism is one of the earliest and most well-documented signs of AD [4, 5]. As mitochondria are primarily responsible for cellular energy production, this directly implicates some aspect of mitochondrial dysfunction in the disease pathology. Supporting this, mitochondrial abnormalities in AD brains have been observed even before the emergence of neurofibrillary tangles, one of the key pathological indicators of AD; this suggests that mitochondrial dysfunction is one of the earliest steps in AD pathology [4].

Research has verified that the deterioration of energy production in the AD brain is not caused by a lack of mitochondria, but rather by a deficiency in the electron transport chain [2]. The ETC is one of the means by which the cell produces ATP: four complexes use the energy of electrons to build a proton gradient, and the subsequent influx of protons is coupled to ADP phosphorylation. Parker et al., studying various aspects of the mitochondrial electron transport chain, found an overall decrease in the activity of all ETC enzyme complexes, especially cytochrome c oxidase, which catalyzes one of the last steps of the chain. This finding was supported by previous research identifying significant decreases in cytochrome c oxidase activity [2, 7]. The brain’s continuous need for energy means that even a short period without glucose or oxygen leads to cell death. Therefore, damage to the complexes of the ETC results in neuronal death and atrophy due to the lack of energy production, which is characteristic of AD [1].

ETC Damage linked to Free Radical Production

Beyond its link to energy failure, the ETC is also a source of toxic reactive species, including hydrogen peroxide, hydroxyl radicals, and superoxide, whose damage aligns with other AD hallmarks [1]. Other processes in the cell also contribute to redox reactions, such as the plasma membrane oxidoreductase system, but here we focus on the mitochondria, and specifically on the ETC’s production of these species. Oxygen is reduced as the final electron acceptor to drive oxidative phosphorylation. As cytochrome c oxidase handles oxygen most directly in this last step, damage to cytochrome c oxidase, as well as to the other complexes, can directly increase reactive oxygen species (ROS) [2, 6]. ROS are free radicals produced as byproducts of energy metabolism, and their levels are maintained by a balance between production via the ETC and clearance via antioxidants and other enzymes [6, 12]. When the ETC is damaged, electrons passing through the chain accumulate at earlier complexes, such as complex I, where they can be donated to molecular oxygen to create a free radical [1]. Under typical conditions, cellular processes neutralize these free radicals, but if production exceeds the cell’s capacity to transform them, the excess creates oxidative stress [1].

The effects of free radicals are heightened in the brain, resulting in oxidative damage that aligns with AD hallmarks. As previously mentioned, the brain has a high demand for oxygen, in addition to a high iron content; both enable ROS production. The brain is also especially vulnerable to ROS damage due to comparatively low antioxidant defenses. Furthermore, the brain is the final destination of many of the body’s polyunsaturated fatty acids – such as omega-3 fatty acids – and membranes enriched in polyunsaturated fatty acids are more sensitive to free radical damage through lipid peroxidation, in which lipids with carbon-carbon double bonds are attacked by free radicals [1]. Synaptic mitochondria are typically the most affected by oxidative stress, which leads to synaptic damage and loss, thereby impairing neurotransmission [8]; at the organismal level, the result may be the cognitive decline characteristic of AD. Oxidative stress can also lead to atrophy: when ETC dysfunction and oxidative stress pass a certain threshold, molecules stored within the mitochondria are released through its increasingly permeable membranes, part of the pathway that activates cell death [6]. As mentioned, widespread atrophy, or neuronal death, is characteristic of AD pathology and also results in cognitive decline. Beyond these two links between ROS and AD, ROS damage participates in a positive feedback loop that exacerbates its own effects: overproduction of ROS induces conformational changes in proteins that affect ETC function, effectively “shutting down” the mitochondria, and the resulting dysfunction further increases ROS levels, creating a spiral toward widespread atrophy [6].

mtDNA, Aging, and Alzheimer’s

Another critical effect of ROS is damage to mitochondrial DNA (mtDNA). Free radicals such as ROS can cause DNA double-strand breaks, protein crosslinking, and mutations via base modifications [5]. The mitochondria are especially susceptible to DNA damage because mtDNA lacks histones. In nuclear DNA, histones tightly wind the DNA, protecting it against UV damage, for instance, by reducing the exposed surface area; studies have indicated that this organization protects against free radical damage as well. Because mtDNA is comparatively small and lacks histones, it is more exposed to free radical damage [1, 5]. Moreover, the proximity of mtDNA to the site of ROS production within the mitochondria further increases the likelihood of damage [5].

mtDNA mutations are especially apparent in AD, primarily because of their connection to the ETC. Studies have indicated increased oxidative damage of mtDNA in AD patients, notably a three-fold increase compared to healthy brains [5]. A study of AD patients also identified the specific mtDNA sequences that most commonly suffer damage; these were linked to the activities of the ETC complexes [9] and, specifically, to decreased cytochrome oxidase activity [5]. As previously discussed, this damage to the ETC ultimately results in neuronal loss, which may explain the cognitive decline in AD patients [1, 6].

Research suggests that ETC activity declines with age, and one hypothesis behind this correlation is the accumulation of mutations over time [6]. Because age is one of the chief risk factors for AD, the question arises whether the accumulation of mtDNA mutations and damage is simply a result of aging, given that AD is diagnosed later in life. A study exploring this question identified higher mtDNA mutation rates in some, but not a majority, of AD brains. The authors suggest that although mtDNA mutations increase with age in everyone, some individuals mutate at a higher rate, raising the probability of AD-specific mutations that increase the likelihood of dementia [9].

Mitochondrial Damage and AB

Given the involvement of mitochondrial dysfunction in AD pathology, research is being conducted to elucidate its connection to one of the primary characteristics of AD: amyloid plaques. Amyloid plaques are conglomerations of Aβ protein, which results from irregular cleavage of the amyloid precursor protein (APP). The nature of APP’s interaction with mitochondria can be explained either by overproduction of APP leading to mitochondrial dysfunction, or by mitochondrial damage somehow triggering amyloid plaque formation.

Aβ has been shown to interfere with mitochondrial function by inhibiting cytochrome oxidase activity, thereby increasing free radical activity and damage [7]. Conversely, it has also been observed that inhibition of cytochrome oxidase promotes APP cleavage to Aβ, producing another positive feedback loop: Aβ inhibits the ETC and causes damage, while that inhibition in turn promotes more Aβ [6]. Furthermore, a study found that deficiencies in the ETC, and the consequent ATP depletion, increased APP cleavage to the aggregation-prone Aβ isoform, possibly due to greater exposure to proteases [3, 10]; this would drive the accumulation of the amyloid plaques characteristic of AD. The upregulation of mitochondrial genes in AD patients also supports a connection between the organelle and AD pathology [7], as it may be a compensatory response to the detrimental effects of APP on mitochondrial function.

One hypothesis to explain how APP interferes with mitochondria is that mutant APP derivatives (the aggregation-prone Aβ isoforms) enter the mitochondria and disrupt the ETC, thereby generating free radicals [7]. Evidence for this chain of reasoning is that γ-secretase, which is needed to cleave APP, is found inside the mitochondria; this suggests that full-length APP enters the mitochondria, is cleaved there, and may then interfere with mitochondrial proteins [7]. A second possible explanation for the damage comes from a study indicating that accumulated APP blocks mitochondrial protein transport channels, also contributing to mitochondrial dysfunction [4].

Conclusion: the Mitochondrial Cascade Hypothesis

Given the mitochondria’s crucial role in maintaining cellular bioenergetics, the organelle is likely a critical component of numerous facets of neurodegeneration, many still under investigation. An emerging “mitochondrial cascade hypothesis” seeks to highlight the importance of mitochondria in AD pathology by tying together the links discussed so far into one cascade of degenerative processes. Higher ROS production leads to an accumulation of mitochondrial DNA damage, which decreases the ETC’s efficiency, reducing overall oxidative phosphorylation and further increasing ROS production. This augmented ROS production triggers Aβ production from APP, and the increased Aβ (and therefore amyloid plaques) in turn also reduces ETC activity. Meanwhile, decreased oxidative phosphorylation and energy production in these neurons results in apoptosis, which at large scale produces atrophy [6]. As Alzheimer’s is one of many neurodegenerative diseases with no cure, further research into the mitochondrial cascade hypothesis has the potential to expand the limited therapeutics currently available to treat the disease.
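Because the hypothesis is essentially a description of coupled positive feedback, its runaway character can be illustrated with a toy numerical sketch. The model below is purely illustrative – its variables and rate constants are invented for this review, not drawn from the cited literature – but it shows how two mutually reinforcing damage terms produce accelerating decline rather than gradual, linear loss.

```python
# Toy positive-feedback sketch (illustrative constants, not a published model):
# ROS damages the ETC, and a damaged ETC leaks more ROS.
ros, etc_function = 1.0, 1.0          # arbitrary baseline levels

for decade_start in range(0, 50, 10):
    print(f"t={decade_start:2d} y: ETC function {etc_function:.2f}, "
          f"relative ROS {ros:.2f}")
    for _ in range(10):               # one step per year
        damage = 0.02 * ros           # mtDNA/protein damage scales with ROS
        etc_function = max(etc_function - damage, 0.0)
        ros = 1.0 + 2.0 * (1.0 - etc_function)   # impaired ETC leaks more ROS
```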

 

References:

  1. Moreira PI, Carvalho C, Zhu X, Smith MA, Perry G. 2010. Mitochondrial dysfunction is a trigger of Alzheimer’s disease pathophysiology. Biochimica et Biophysica Acta (BBA). 1802(1):2-10. doi:10.1016/j.bbadis.2009.10.006.
  2. Parker WD, Parks J, Filley CM, Kleinschmidt-DeMasters BK. 1994. Electron transport chain defects in Alzheimer’s disease brain. Neurology. 44(6):1090-1096. doi:10.1212/wnl.44.6.1090.
  3. Hirai K, Aliev G, Nunomura A, Fujioka H, Russell RL, Atwood CS, Johnson AB, Kress Y, Vinters HV, Tabaton M, Shimohama S, Cash AD, Siedlak SL, Harris PL, Jones PK, Petersen RB, Perry G, Smith MA. 2001. Mitochondrial abnormalities in Alzheimer’s disease. J Neurosci. 21(9):3017-3023. doi:10.1523/JNEUROSCI.21-09-03017.2001.
  4. Anandatheerthavarada HK, Biswas G, Robin M, Avadhani NG. 2003. Mitochondrial targeting and a novel transmembrane arrest of Alzheimer’s amyloid precursor protein impairs mitochondrial function in neuronal cells. J Cell Biol. 161(1):41–54. doi:10.1083/jcb.200207030.
  5. Wang X, Wang W, Li L, Perry G, Lee H, Zhu X. 2014. Oxidative stress and mitochondrial dysfunction in Alzheimer’s disease. Biochimica et Biophysica Acta (BBA). 1842(8):1240-1247. doi:10.1016/j.bbadis.2013.10.015.
  6. Swerdlow RH, Khan SM. 2004. A “mitochondrial cascade hypothesis” for sporadic Alzheimer’s disease. Medical Hypotheses. 63(1):8-20. doi:10.1016/j.mehy.2003.12.045.
  7. Manczak M, Anekonda TS, Henson E, Park BS, Quinn J, Reddy PH. 2006. Mitochondria are a direct site of Aβ accumulation in Alzheimer’s disease neurons: implications for free radical generation and oxidative damage in disease progression. Human Molecular Genetics. 15(9):1437–1449. doi:10.1093/hmg/ddl066.
  8. Reddy PH, Beal MF. 2008. Amyloid beta, mitochondrial dysfunction and synaptic damage: implications for cognitive decline in aging and Alzheimer’s disease. Trends Mol Med. 14(2):45-53. doi:10.1016/j.molmed.2007.12.002.
  9. Coskun PE, Beal MF, Wallace DC. 2004. Alzheimer’s brains harbor somatic mtDNA control-region mutations that suppress mitochondrial transcription and replication. Proc Natl Acad Sci U S A. 101(29):10726-10731. doi:10.1073/pnas.0403649101.
  10. Gabuzda D, Busciglio J, Chen LB, Matsudaira P, Yankner BA. 1994. Inhibition of energy metabolism alters the processing of amyloid precursor protein and induces a potentially amyloidogenic derivative. J Biol Chem. 269(18):13623-13628.

Semaglutide: A New GLP-1RA for Type 2 Diabetes Mellitus Treatment

By Saloni Dhaktode, Genetics and Genomics ’22

Author’s Note: My interest in research and biology began with understanding diabetes. This topic is close to my heart because my family is very susceptible to Type 2 diabetes, and many families of various ethnic groups in the U.S. are as well. Each patient has a unique background and lifestyle, which makes them unique in how their body handles both the condition and its treatments. My UWP 104E class with Dr. Nathaniel Williams gave me the opportunity to write this literature review and share one of the newest options available to Type 2 diabetic patients. I hope readers can learn more about how semaglutide is another step to best serving this uniqueness.

 

Introduction

The 2020 National Diabetes Statistics Report by the Centers for Disease Control and Prevention (CDC) states that 34.2 million people in the United States have diabetes, either diagnosed or undiagnosed. Type 2 Diabetes Mellitus (T2D) accounts for about 90-95% of these cases. In healthy individuals, insulin produced by pancreatic β-cells moves glucose into cells to be converted into energy, thus lowering glucose levels in the bloodstream. T2D is a chronic condition characterized by the body’s resistance to, or inadequate production of, β-cell insulin. This leads to uncontrolled hyperglycemia, or high blood glucose levels [1]. To compensate for rising blood glucose, the body may produce more insulin than normal, a condition known as hyperinsulinemia [8]. Since the body cannot properly respond to the accumulating insulin, hyperglycemia can persist alongside hyperinsulinemia.

Semaglutide, a medication used to treat T2D, belongs to the class of glucagon-like peptide-1 receptor agonists (GLP-1RAs). An agonist is a chemical that activates a receptor; as an agonist, semaglutide activates receptors that prompt insulin release. Hence, drugs of this class regulate glycemic levels and are also linked to the treatment of obesity and cardiovascular disease, two conditions associated with T2D [2, 3, 6, 7, 9, 10]. Previously released medications in this class include liraglutide, dulaglutide, exenatide, and lixisenatide [6]. Semaglutide was added to the list in 2017 as a longer-acting alternative. GLP-1RAs were administered only through subcutaneous injection until 2019, when the first oral form, a pill version of semaglutide, was approved by the U.S. Food and Drug Administration. The SUSTAIN and PIONEER trials conducted by Novo Nordisk led to semaglutide’s release.

Because semaglutide is a relatively recent development, further clinical trials are currently ongoing, which is why it is not recommended as the first choice for T2D treatment. It nevertheless provides a substantial option for patients who do not see improvement with, or have severe adverse reactions to, established treatments such as metformin (usually the first choice) or sulfonylureas (which also increase insulin secretion). The National Diabetes Statistics Report indicates an increase in total diabetes cases over the years [1], which calls for new medications that can benefit patients of diverse medical backgrounds. This review analyzes the function and effects of semaglutide using various clinical trials in order to determine the scope of the drug’s ability to combat insulin resistance and other conditions associated with T2D.

What is Semaglutide?

Semaglutide is a GLP-1RA, a drug class that mimics the activity of the human glucagon-like peptide-1 (GLP-1) hormone; semaglutide in particular shares 94% sequence homology with GLP-1 [3, 5]. Glucagon, a hormone produced by pancreatic α-cells, raises blood glucose and induces insulin release to keep glycemic levels balanced. GLP-1 is called “glucagon-like” because it shares similarities with glucagon and enhances insulin secretion. GLP-1 is deficient in T2D patients, which is why semaglutide is designed as an agonist that mimics it: semaglutide activates GLP-1 receptors in the pancreas, promoting greater insulin release. By imitating GLP-1, semaglutide lowers glycemic levels, commonly indicated by decreased levels of hemoglobin A1c (HbA1c), a measure of the glucose attached to hemoglobin in the bloodstream [2, 4, 9, 10].

Novo Nordisk developed two forms of semaglutide: a subcutaneous injection and an oral pill, released and sold under the brand names Ozempic (injection) and Rybelsus (pill). Because semaglutide’s half-life of 6-8 days is extended compared to earlier GLP-1RAs, the injection is administered once weekly [3, 6, 7].
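As a back-of-the-envelope check on why a 6-8 day half-life supports weekly dosing, the sketch below assumes simple first-order (exponential) elimination – an idealization, not a full pharmacokinetic model – and computes the fraction of a dose still circulating when the next weekly dose is due.

```python
# Idealized first-order elimination: fraction remaining after t days
# is 0.5 ** (t / t_half). Half-lives span the 6-8 day range cited above.
for t_half in (6.0, 7.0, 8.0):
    remaining = 0.5 ** (7.0 / t_half)
    print(f"t1/2 = {t_half:.0f} days: {remaining:.0%} of a dose remains after one week")
```

Roughly half of each dose is still on board a week later, so weekly injections maintain relatively steady drug levels rather than sharp peaks and troughs.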

While no fatal safety issues have been reported with semaglutide, adverse effects must be considered. The most frequent are mild to moderate gastrointestinal events, such as nausea, vomiting, and diarrhea [2, 3, 4, 5, 7, 9, 10]. A chance of hypoglycemia is always present, especially if semaglutide is taken with other antidiabetics, but most of the clinical trials reported low hypoglycemia rates [2, 4, 9, 10]. Significant increases in lipase were also reported [2, 4, 7]. Lipase is an enzyme that helps the body break down fats, but elevated levels can be linked to pancreatitis, and semaglutide still requires testing in patients with a history of pancreatitis [3]. However, semaglutide exhibits a safety profile similar to other GLP-1RAs [2, 6, 7, 9, 10], so these effects are not unexpected.

Semaglutide Administration

Subcutaneous Injection

Subcutaneous injection is the most common form of GLP-1RA medication. The injection, commercially available as Ozempic, is delivered under the patient’s skin into the tissue layer that lies between the skin and the muscle. This tissue layer has a lower blood supply, which allows the medication to enter the bloodstream slowly and in a controlled manner.

The SUSTAIN 1 clinical trial, conducted by Sorli et al. (2017) of Novo Nordisk, tested the efficacy of subcutaneous semaglutide monotherapy versus placebo in T2D patients. Participants 18 years or older with T2D were randomly assigned once-weekly subcutaneous semaglutide (0.5 mg or 1.0 mg) or volume-matched placebo over a 30-week testing period. HbA1c levels significantly decreased by 1.45% with 0.5 mg and 1.55% with 1.0 mg [9], confirming subcutaneous semaglutide’s superiority over placebo. The Ozempic patient site lists a starting dose of 0.25 mg, which increases to 0.5 mg and then 1.0 mg if needed, in accordance with the doses tested in SUSTAIN 1.

Oral Pill 

The oral pill form of semaglutide is the first oral version of any GLP-1RA [2, 4]. Since semaglutide is peptide-based, it is prone to proteolytic damage in the stomach. To overcome this issue, the tablet is co-formulated with sodium N-[8-(2-hydroxybenzoyl)amino]caprylate (SNAC), which enhances the absorption of semaglutide across the stomach’s mucus layer and protects it from proteolytic degradation [2, 4, 6, 9].

The PIONEER 1 clinical trial, conducted by Aroda et al. (2019) of Novo Nordisk, tested the efficacy of oral semaglutide monotherapy against placebo in T2D patients. Participants 18 years or older with T2D were randomly assigned once-daily oral semaglutide (3 mg, 7 mg, or 14 mg) or a placebo over a 26-week testing period. The largest dose, 14 mg, decreased HbA1c levels by an average of 1.5% [2], and the trial confirmed oral semaglutide’s superiority over placebo at all dose levels. The Rybelsus patient site lists 7 mg or 14 mg tablets, corresponding to the doses tested in PIONEER 1.

Oral vs. Injection

The availability of two semaglutide products raises the question of which form of administration is more effective in enhancing insulin secretion. The 1.45% to 1.55% HbA1c reductions seen in the SUSTAIN 1 trial are comparable to the 1.5% HbA1c reduction in the PIONEER 1 trial. Each of the trials differed in methods and testing duration, but the percent reductions of HbA1c are very similar, with respect to the doses each trial’s researchers deemed most effective. 

A clinical trial by Davies et al. (2017) assessed the efficacy of oral semaglutide versus subcutaneous semaglutide or placebo in T2D patients. Participants 18 years or older with T2D were randomly assigned to one of five oral semaglutide groups, an oral placebo group, or a subcutaneous semaglutide group. The oral groups’ HbA1c levels were significantly reduced by an average of 1.8%, and the subcutaneous group’s by 1.9% – evidently very little difference between the two group types [4]. This supports the similarity in HbA1c reduction between the SUSTAIN 1 and PIONEER 1 trials observed earlier. It can therefore be concluded that there is no significant difference in either form’s ability to enhance insulin secretion.

Since there is no obvious advantage or disadvantage to either form, the choice between an injection and a pill is open to patients, according to their preference or compatibility with their bodies. The subcutaneous tissue layer has a lower blood supply, which allows semaglutide to enter the bloodstream slowly; similarly, a semaglutide pill must be metabolized by the gastrointestinal system before entering the bloodstream. The slow absorption of both forms lowers the risk of sudden hypoglycemia. Patients can also take into account (using the Rybelsus and Ozempic patient sites) that the pill (Rybelsus) must be taken once daily on an empty stomach, as food can hinder its absorption, whereas the injection (Ozempic) is administered once weekly with or without food.

Effects of Semaglutide on Conditions Associated with T2D

On Obesity 

People with obesity are at a higher risk of being diagnosed with T2D, so it is important to note that semaglutide’s benefits include weight loss. GLP-1RAs have been shown to stimulate satiety and reduce hunger and energy intake, effects that may be due to activation of GLP-1 receptors in the hypothalamus, the part of the brain that controls appetite. A study conducted by Blundell et al. (2017) investigated the effects of semaglutide on appetite, energy intake, and body weight in patients with obesity, deliberately excluding participants diagnosed with diabetes. Subjects 18 years or older were randomized to once-weekly subcutaneous semaglutide or placebo, both at 1.0 mg doses, for 12 weeks, and were allowed ad libitum (i.e., unrestricted) meals. Energy intake was 24% lower across all ad libitum meals with semaglutide versus placebo, and results also indicate lower preferences for high-fat foods and better portion control with semaglutide. Body weight was lowered by about 5.0 kg, which can be attributed to the changes in appetite [3]. Weight loss was also observed in SUSTAIN 1 and PIONEER 1 [2, 9]. These effects, in conjunction with enhanced insulin secretion, would particularly help obese T2D patients, who are more likely to deal with higher glycemic levels.

It is worth noting that it is possible to have T2D without being overweight or obese. For example, the Body Mass Index (BMI) cutoff for diabetes screening is lower in some ethnic groups than in others, which may reflect genetic rather than dietary factors. However, the PIONEER and SUSTAIN trials’ subjects were undergoing diet and exercise prior to screening, indicating that weight loss was a goal, and the subjects’ mean BMI was greater than 30. Because T2D in non-overweight individuals is less common and harder to detect, there remains a need for more research on how T2D treatments that promote weight loss affect them.

On Cardiovascular Disease

T2D increases the chances of developing cardiovascular disease; in fact, cardiovascular disease is the leading cause of death in T2D patients [7]. According to the CDC, excess blood glucose damages blood vessels over time, preventing oxygen-rich blood from reaching the heart. A study by Marso et al. (2016) for the SUSTAIN 6 trial investigated the effects of semaglutide on Major Adverse Cardiovascular Events (MACE) versus placebo, using the same doses as SUSTAIN 1. The trial confirmed the researchers’ hypothesis that semaglutide would be non-inferior to placebo; in fact, semaglutide produced a significant 26% decrease in MACE, a composite of cardiovascular death, non-fatal stroke, and non-fatal myocardial infarction [7].

Semaglutide with Metformin

Metformin is the preferred first-line treatment for T2D because it has been well studied and successfully used as such since the 1950s. It is classified as a biguanide, an oral drug that prevents glucose production in the liver and lowers insulin resistance. Dual therapy of semaglutide (oral or subcutaneous) added to metformin is of interest because many T2D patients do not see satisfactory results with metformin monotherapy, and semaglutide may be able to provide additional glycemic control.

The PIONEER 8 trial, conducted by Zinman et al. (2019), investigated the efficacy of oral semaglutide versus placebo in T2D patients taking insulin with or without metformin, using the same doses as PIONEER 1. Results show that 14 mg of semaglutide with insulin, regardless of the presence of metformin, reduced HbA1c levels by 1.3%, significantly more than placebo [10]. An earlier study by Hausner et al. (2017) explored the effects of subcutaneous semaglutide on the pharmacokinetics of metformin in healthy subjects and found no significant interactions between the two medications [5]. Further research must be conducted to determine whether metformin works better with semaglutide than on its own; nevertheless, it is evident that semaglutide can be used safely in conjunction with metformin without dosage adjustments.

Conclusion

The clinical trials referenced in this review have demonstrated that both subcutaneous and oral semaglutide are significantly effective in lowering HbA1c levels in T2D patients [2, 4, 9, 10]. Semaglutide has also proven effective in promoting weight loss and reducing the risk of MACE [3, 7]. Its efficacy is a major advancement in T2D treatment and GLP-1-based therapies because of its diverse functions: its ability to treat hyperglycemia, obesity, and cardiovascular disease; its availability in oral and injection forms; and its compatibility with metformin cater to T2D patients with various needs.

Semaglutide was only recently approved, with more clinical trials being run by Novo Nordisk and other institutions today. Future research should focus on investigating the advantages of oral over subcutaneous forms and metformin-semaglutide dual therapy over metformin monotherapy. These studies would provide deeper insight into determining the best possible treatments for T2D.

 

References:

  1. American Diabetes Association. 2. Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes—2020. 2019. Diabetes Care. 43(Supplement 1):S14-S31. doi:10.2337/dc20-s002
  2. Aroda, V. R., Rosenstock, J., Terauchi, Y., Altuntas, Y., Lalic, N. M., Villegas, E. C. M., … Haluzík, M. 2019. PIONEER 1: Randomized Clinical Trial of the Efficacy and Safety of Oral Semaglutide Monotherapy in Comparison With Placebo in Patients With Type 2 Diabetes. Diabetes Care [Internet]. 42(9):1724–1732. doi:10.2337/dc19-0749
  3. Blundell, J., Finlayson, G., Axelsen, M., Flint, A., Gibbons, C., Kvist, T., & Hjerpsted, J. 2017. Effects of once‐weekly semaglutide on appetite, energy intake, control of eating, food preference and body weight in subjects with obesity. Diabetes, Obesity and Metabolism [Internet]. 19(9):1242–1251. doi:10.1111/dom.12932
  4. Davies, M., Pieber, T. R., Hartoft-Nielsen, M.-L., Hansen, O. K. H., Jabbour, S., & Rosenstock, J. 2017. Effect of Oral Semaglutide Compared With Placebo and Subcutaneous Semaglutide on Glycemic Control in Patients With Type 2 Diabetes. Jama [Internet]. 318(15):1460-1470. doi:10.1001/jama.2017.14752
  5. Hausner, H., Karsbøl, J. D., Holst, A. G., Jacobsen, J. B., Wagner, F.-D., Golor, G., & Anderson, T. W. 2017. Effect of Semaglutide on the Pharmacokinetics of Metformin, Warfarin, Atorvastatin and Digoxin in Healthy Subjects. Clinical Pharmacokinetics [Internet]. 56(11):1391–1401. doi:10.1007/s40262-017-0532-6
  6. Knudsen, L. B., & Lau, J. 2019. The Discovery and Development of Liraglutide and Semaglutide. Frontiers in Endocrinology [Internet]. 10. doi:10.3389/fendo.2019.00155
  7. Marso, S. P., Bain, S. C., Consoli, A., Eliaschewitz, F. G., Jódar, E., Leiter, L. A., … Vilsbøll, T. 2016. Semaglutide and Cardiovascular Outcomes in Patients with Type 2 Diabetes. New England Journal of Medicine [Internet]. 375(19):1834–1844. doi:10.1056/nejmoa1607141
  8. Shanik, M. H., Xu, Y., Škrha, J., Dankner, R., Zick, Y., & Roth, J. 2008. Insulin resistance and hyperinsulinemia. Diabetes Care [Internet]. 31(Supplement_2). doi:10.2337/dc08-s264
  9. Sorli, C., Harashima, S.-I., Tsoukas, G. M., Unger, J., Karsbøl, J. D., Hansen, T., & Bain, S. C. 2017. Efficacy and safety of once-weekly semaglutide monotherapy versus placebo in patients with type 2 diabetes (SUSTAIN 1): a double-blind, randomised, placebo-controlled, parallel-group, multinational, multicentre phase 3a trial. The Lancet Diabetes & Endocrinology [Internet]. 5(4):251–260. doi:10.1016/s2213-8587(17)30013-x
  10. Zinman, B., Aroda, V. R., Buse, J. B., Cariou, B., Harris, S. B., Hoff, S. T., … Araki, E. 2019. Efficacy, Safety, and Tolerability of Oral Semaglutide Versus Placebo Added to Insulin With or Without Metformin in Patients With Type 2 Diabetes: The PIONEER 8 Trial. Diabetes Care [Internet]. 42(12):2262–2271. doi:10.2337/dc19-0898