
Author Archives: blbahn


Newest Posts

CRISPR Conundrum: Pursuing Consensus on Human Germline Editing

By Daniel Erenstein, Neurobiology, Physiology, and Behavior, ‘21

Author’s Note: In November 2018, a scientist in China became the first person to claim that they had edited the genes of human embryos carried to term. Twins, given the pseudonyms Lulu and Nana, were born from these controversial experiments. This news rapidly propelled the debate on human germline genome editing into the mainstream. My interest in this issue was inspired by my involvement with the Innovative Genomics Institute, located in Berkeley, CA. While attending Berkeley City College during the spring and fall semesters of 2019, I participated in the institute’s CRISPR journal club for undergraduates. Each week, we discussed the latest research from the field of CRISPR gene editing. I also took part in a conference, attended by leading geneticists, bioethicists, philosophers, professors of law and policy, science journalists, and other stakeholders, examining where the consensus, if any, lies on human germline genome editing. Discussions from this conference serve as a foundation for this submission to The Aggie Transcript.

 

New details have emerged in the ongoing controversy that kicked off in November of 2018 when a Chinese biophysicist claimed that, during in vitro fertilization, he had genetically edited two embryos that were later implanted into their mother. Twins, pseudonymously named Lulu and Nana, are believed to have been born as a result of these experiments. This announcement from He Jiankui, made in a presentation at the Second International Summit on Human Genome Editing in Hong Kong, was largely met with swift condemnation from scientists and bioethicists [1, 2].

Late last year, excerpts of the unpublished research report were made public for the first time since He’s announcement, shedding light on his approach to edit resistance to human immunodeficiency virus, or HIV, into human genomes using CRISPR-Cas9 [3]. CRISPR, short for clustered regularly interspaced short palindromic repeats, refers to specific patterns in bacterial DNA. Normally, a bacterium that has survived an attack by a bacteriophage (a virus that infects bacteria and depends on them in order to reproduce) will catalog bacteriophage DNA by incorporating these viral sequences into its own DNA library. This genetic archive of viral DNA, stored between the palindromic repeats of CRISPR, can be revisited as a reference when the bacterium faces future attacks, aiding in its immune response [4].

To respond effectively, bacteria will transcribe a complementary CRISPR RNA molecule from the existing CRISPR sequence. Using crRNA (short for CRISPR RNA) as a guide, CRISPR-associated proteins play the part of a search engine, scanning the cell for any entering viral DNA that matches the crRNA sequence [5]. There are many subtypes of CRISPR-associated proteins [6], but Cas9 is one such type that acts as an enzyme, catalyzing double-stranded breaks in sequences complementary to the guide [7]. This immune system effectively defends against the DNA of invading bacteriophages, protecting the bacterium from succumbing to the virus [5].
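To make the guide-matching step concrete, here is a minimal Python sketch, not taken from the article and using an invented guide sequence: the code scans a DNA string for a 20-nucleotide stretch identical to the guide that is immediately followed by an "NGG" PAM motif, then reports a cut site roughly three base pairs upstream of the PAM, in the manner of SpCas9.

```python
# Toy model of how Cas9 finds its target (sequences invented for illustration):
# scan DNA for a 20-nt protospacer that matches the crRNA guide and is
# immediately followed by an "NGG" PAM, then record a cut ~3 bp upstream of the PAM.

GUIDE = "GGTGACATCAATTATTATAC"  # hypothetical 20-nt guide (spacer) sequence

def find_cut_sites(dna: str, guide: str = GUIDE) -> list[int]:
    """Return positions where a Cas9-like nuclease would cut the given strand."""
    cuts = []
    n = len(guide)
    for i in range(len(dna) - n - 2):
        protospacer = dna[i:i + n]
        pam = dna[i + n:i + n + 3]
        if protospacer == guide and pam[1:] == "GG":  # any nucleotide, then GG
            cuts.append(i + n - 3)  # blunt cut ~3 bp upstream of the PAM
    return cuts

if __name__ == "__main__":
    target = "AAAA" + GUIDE + "TGG" + "CCCC"  # guide followed by a TGG PAM
    print(find_cut_sites(target))             # -> [21]
```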

A cell’s built-in mechanisms typically repair any double-stranded breaks in DNA via one of two processes: nonhomologous end-joining (NHEJ) or homology-directed repair (HDR) [8]. During NHEJ, base pairs might be unintentionally inserted or deleted, producing small mutations called indels that often shift the reading frame of the repaired DNA sequence. These mutations significantly affect the structure and function of any protein encoded by the sequence and can result in a completely nonfunctional gene product. NHEJ is frequently relied upon by gene editing researchers to “knock out” or inactivate certain genes. HDR is less efficient, but the process is often exploited by scientists to “knock in” genes or substitute DNA [9].
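The reading-frame logic behind NHEJ-based knockouts can be illustrated with a short, hypothetical Python example (the coding sequence below is invented): deleting a single base shifts every downstream codon, while deleting three bases removes one codon and leaves the rest of the frame intact.

```python
# Minimal illustration of NHEJ-style indels (the coding sequence is invented):
# a 1-bp deletion shifts every downstream codon (a frameshift), while a 3-bp
# deletion removes one codon but leaves the rest of the reading frame intact.

def codons(seq: str) -> list[str]:
    """Split a DNA sequence into consecutive 3-base codons (dropping any remainder)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

coding = "ATGGCTGAAGTTCTGTAA"  # hypothetical short open reading frame

print(codons(coding))                   # ['ATG', 'GCT', 'GAA', 'GTT', 'CTG', 'TAA']
print(codons(coding[:4] + coding[5:]))  # 1-bp deletion: downstream codons all shift
print(codons(coding[:3] + coding[6:]))  # 3-bp deletion: one codon lost, frame preserved
```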

CRISPR is programmable, meaning that certain DNA sequences can be easily added at these cut sites, precisely altering the cell’s genetic code at specific locations. Jiankui He was not the first to use CRISPR to edit the genes of human embryos, but he was the first known to have performed these experiments on viable embryos intended for a pregnancy. He and two of his colleagues have since been fined and sentenced to prison for falsifying ethical review documents and misinforming doctors, the state-run Chinese news agency Xinhua reported in December 2019 [10]. But He’s experiments reportedly yielded another birth during the second half of 2019 [11], confirmed by China in January [12], and Russian scientist Denis Rebrikov has since expressed strong interest in moving forward with human germline genome editing to explore a potential cure for deafness [13].

Despite what seems like overwhelming opposition to human germline genome editing, He’s work has even generated interest from self-described biohackers like Josiah Zayner, CEO of The ODIN, a company which produces do-it-yourself genetic engineering kits for use at home and in the classroom. 

“As long as the children He Jiankui engineered haven’t been harmed by the experiment, he is just a scientist who forged some documents to convince medical doctors to implant gene-edited embryos,” said Zayner in a STAT opinion reacting to news of He’s sentence [14]. “The 4-minute mile of human genetic engineering has been broken. It will happen again.”

Concerns abound, though, about the use of this technology to cure human diseases. And against the chilling backdrop of a global COVID-19 pandemic, fears run especially high about bad actors using CRISPR gene editing with malicious intent. 

“A scientist or biohacker with basic lab know-how could conceivably buy DNA sequences and, using CRISPR, edit them to make an even more panic-inducing bacteria or virus,” said Neal Baer, a television producer and global health lecturer at Harvard Medical School, in a recent STAT opinion [15]. “What’s to stop a rogue scientist from using CRISPR to conjure up an even deadlier version of Ebola or a more transmissible SARS?”

Into the unknown: understanding off-target effects

In his initial presentation, He said that he had targeted the C-C chemokine receptor type 5 (CCR5) gene, which codes for a receptor on white blood cells recognized by HIV during infection. His presentation suggested that gene editing introduced a known mutation named CCR5Δ32 that changes the receptor enough to at least partially inhibit recognition by HIV. The babies’ father was a carrier of HIV, so this editing was performed to supposedly protect the twins from future HIV infection [16]. 

He’s edits to the CCR5 gene, and human germline genome editing in general, worry geneticists because the off-target effects of introducing artificial changes into the human gene pool are largely unknown. In a video posted on his lab’s YouTube channel [17], He claimed that follow-up sequencing of the twins’ genomes confirmed that “no gene was changed except the one to prevent HIV infection.”

Excerpts from the unpublished study indicate otherwise, according to an expert asked to comment on He’s research in MIT Technology Review, because any cells taken from the twins to run these sequencing tests were no longer part of the developing embryos [3].

“It is technically impossible to determine whether an edited embryo ‘did not show any off-target mutations’ without destroying that embryo by inspecting every one of its cells,” said Fyodor Urnov, professor of molecular and cell biology at UC Berkeley and gene-editing specialist [3]. “This is a key problem for the entirety of the embryo-editing field, one that the authors sweep under the rug here.”

Urnov’s comments raise concerns about “mosaicism” in the cells of Lulu and Nana—and any other future babies brought to term after germline genome editing during embryonic stages of development. In his experiments, He used preimplantation genetic diagnosis to verify gene editing. Even if the cells tested through this technique showed the intended mutation, though, there is a significant risk that the remaining cells in the embryo were left unedited or that unknown mutations with unforeseeable consequences were introduced [16].

While the CCR5Δ32 mutation has, indeed, been found to be associated with HIV resistance [18, 19], even individuals with both copies of CCR5Δ32 can still be infected with certain strains of HIV [20]. In addition, the CCR5Δ32 mutation is found almost exclusively in certain European populations and in very low frequencies elsewhere, including China [21, 22], amplifying the uncertain risk of introducing this particular mutation into Chinese individuals and the broader Chinese gene pool [16].

Perhaps most shocking to the scientific community is the revelation that He’s experiment did not actually edit the CCR5 gene as intended. In He’s November 2018 presentation, he discussed the rates of mutation via non-homologous end-joining but made no mention of the other repair mechanism, homology-directed repair, which would be used to “knock in” the intended mutation. This “[suggests] that He had no intention of generating the CCR5Δ32 allele,” wrote Haoyi Wang and Hui Yang in a PLoS Biology paper on He’s experiments [16].

Gauging the necessity of germline genome editing

The potential of CRISPR to revolutionize how we treat diseases like cystic fibrosis, sickle cell disease, and muscular dystrophy is frequently discussed in the news; just recently, clinical trials involving a gene-editing treatment for Leber congenital amaurosis, a rare genetic eye disorder, stirred enthusiasm by becoming the first treatment to directly edit DNA while it is still in the body [23]. While this treatment edits somatic cells—cells that are not passed on to future generations during reproduction—there is increasing demand for the use of germline genome editing as well, despite the reservations of scientists and bioethicists.

This raises the question: how will society decide what types of genetic modifications are needed? In the case of He’s experiments, most agree that germline genome editing was an unnecessary strategy to protect against HIV. Assisted reproductive technology (ART), a technique that includes washing the father’s sperm of excess seminal fluids before in vitro fertilization (IVF), was used in He’s experiments [3] and has already been established as an effective defense against HIV transmission [24]. Appropriately handling gametes (sperm and egg cells) during IVF is an additional method used to protect the embryo from viral transmission, according to Jeanne O’Brien, a reproductive endocrinologist at the Shady Grove Fertility Center [3].

“As for considering future immunity to HIV infection, simply avoiding potential risk of HIV exposure suffices for most people,” wrote Wang and Yang in their PLoS Biology paper [16]. “Therefore, editing early embryos does not provide benefits for the babies, while posing potentially serious risks on multiple fronts.”

One such unintended risk of He’s experiments might be increased susceptibility to West Nile virus, an infection thought to be prevented by unmutated copies of the CCR5 receptor [11]. 

In a paper that examines the societal and ethical impacts of human germline genome editing, published last year in The CRISPR Journal [25], authors Jodi Halpern, Sharon O’Hara, Kevin Doxzen, Lea Witkowsky, and Aleksa Owen add that “this mutation may increase vulnerability to other infections such as influenza, creating an undue burden on these offspring, [so] we would opt instead for safer ways to prevent HIV infection.”

The authors go on to propose the implementation of a Human Rights Impact Assessment. This assessment would evaluate germline editing treatments or policies using questions that weigh the benefits of an intervention against its possible risks or its potential to generate discrimination. The ultimate goal of such an assessment would be to “establish robust regulatory frameworks necessary for the global protection of human rights” [25].

Most acknowledge that there are several questions to answer before human germline genome editing should proceed: Should we do it? Which applications of the technology are ethical? How can we govern human germline genome editing? Who has the privilege of making these decisions?

Evaluating consensus on germline genome editing

In late October of last year, scientists, bioethicists, policymakers, patient advocates, and religious leaders gathered with members of the public in Berkeley for a discussion centered on some of these unanswered questions. The CRISPR Consensus? conference was organized by the Innovative Genomics Institute, which houses the lab of Jennifer Doudna, a pioneer of CRISPR gene editing technologies and professor of biochemistry and molecular biology at UC Berkeley, in collaboration with the Initiative on Science, Technology, and Human Identity at Arizona State University and the Keystone Policy Center.

The goal of the conference was to generate conversation about where the consensus, if any, lies on human germline genome editing. One of the conference organizers, J. Benjamin Hurlbut, emphasized the role that bioethics—the study of ethical, social, and legal issues caused by biomedical technologies—should play in considerations of germline genome editing. 

He’s “aim was apparently to race ahead of his scientific competitors but also to reshape and speed up, as he put it, the ethical debate. But speed is surely not what we need in this case,” said Hurlbut, associate professor of biology and society at Arizona State University, at the conference [26].

Central to the debate surrounding consensus is the issue of stakeholders in decision-making about germline genome editing. Experts seem to be divided in their definitions of a stakeholder, with varying opinions about the communities that should be included in governance. They do agree, however, that these discussions are paramount to ensure beneficence and justice, tenets of bioethical thought, for those involved. 

An underlying reason for these concerns is that, should human germline genome editing become widely available in the future, the cost of these therapies might restrict access to certain privileged populations.

“I don’t think it’s far-fetched to say that there’s institutionalized racism that goes on around access to this technology, the democratization and self-governance of it,” said Keolu Fox, a UC San Diego scholar who studies the anthropology of natural selection from a genomics perspective. Fox focused his discussion on indigenous populations when addressing the issue of autonomy in governance of germline genome editing [26]. 

“If we don’t put indigenous people or vulnerable populations in the driver’s seat so that they can really think about the potential applications of this type of technology, self-governance, and how to create intellectual property that has a circular economy that goes back to their community,” Fox said, “that is continued colonialism in 2020.”

Indeed, marginalized communities have experienced the harms that genetics can be used to justify, and millions of lives have been lost throughout human history to ideologies emphasizing genetic purity, such as eugenics and Nazism.

“We know that history with genetics is wrought with a lot of wrongdoings and also good intentions that can go wrong, and so there’s a community distrust [of germline editing],” said Billie Liangolou, a UC San Francisco (UCSF) Benioff Children’s Hospital genetic counselor, during a panel on stakeholders that included Fox. Liangolou works with expecting mothers, guiding them through the challenges associated with difficult genetic diagnoses during pregnancy [26].

Others agree that the communities affected most by human germline genome editing should be at the forefront of decision-making about this emerging technology. Sharon Begley, a senior science writer at STAT News, told the conference audience that a mother with a genetic disease once asked her if she could “just change my little drop of the human gene pool so that my children don’t have this terrible thing that I have” [26].

This question, frequently echoed throughout society by other prospective parents, reflects the present-day interest in human germline genome editing technologies, interest that will likely continue to grow as further research on human embryos continues.

In an opinion published by STAT News, Ethan Weiss, a cardiologist and associate professor of medicine at UCSF, acknowledges the concerns of parents faced with these decisions [27]. His daughter, Ruthie, has oculocutaneous albinism, a rare genetic disorder characterized by mutations in the OCA2 gene, which is involved in producing melanin. Necessary for normally functioning vision, melanin is a pigment found in the eyes [28].

Weiss and his partner “believe that had we learned our unborn child had oculocutaneous albinism, Ruthie would not be here today. She would have been filtered out as an embryo or terminated,” he said.

But, in the end, Weiss offers up a cautionary message to readers, encouraging people to “think hard” about the potential effects of human germline genome editing. 

“We know that Ruthie’s presence in this world makes it a better, kinder, more considerate, more patient, and more humane place,” Weiss said. “It is not hard, then, to see that these new technologies bring risk that the world will be less kind, less compassionate, and less patient when there are fewer children like Ruthie. And the kids who inevitably end up with oculocutaneous albinism or other rare diseases will be even less ‘normal’ than they are today.”

Weiss’ warning is underscored by disability rights scholars who say that treating genetic disorders with CRISPR or other germline editing technologies could lead to heightened focus on those who continue to live with these disabilities. In an interview with Katie Hasson of the Center for Genetics and Society, located in Berkeley, Jackie Leach Scully commented on the stigmatization that disabled people might face in a world where germline editing is regularly practiced [29].

“Since only a minority of disability is genetic, even if genome editing eventually becomes a safe and routine technology it won’t eradicate disability,” said Scully, professor of bioethics at the University of New South Wales in Australia. “The concern then would be about the social effects of [heritable genome editing] for people with non-genetic disabilities, and the context that such changes would create for them.” 

Others worry about how to define the boundary between the prevention of genetic diseases and the enhancement of desirable traits—and what this means for the decisions a germline editing governing body would have to make about people’s value in society. Emily Beitiks, associate director of the Paul K. Longmore Institute on Disability at San Francisco State University, is among the community of experts who have raised such concerns [30].

 “Knowing that these choices are being made in a deeply ableist culture,” said Beitiks in an article posted on the Center for Genetics and Society’s blog [30], “illustrates how hard it would be to draw lines about what genetic diseases ‘we’ agree to engineer out of the gene pool and which are allowed to stay.”

Religious leaders have also weighed in on the ethics of human germline genome editing. Father Joseph Tham, who has previously published work on what he calls “the secularization of bioethics,” presented his views on the role of religion in this debate about bioethics at the conference [26].

“Many people in the world belong to some kind of religious tradition, and I think it would be a shame if religion is not a part of this conversation,” said Tham, professor at Regina Apostolorum Pontifical University’s School of Bioethics.

Tham explained that the church already disapproves of IVF techniques, let alone human germline editing, “because in some way it deforms the whole sense of the human sexual act.”

Islamic perspectives on germline editing differ. In a paper published last year, Mohammed Ghaly, one of the conference panelists, discussed how the Islamic religious tradition informs perspectives on human genome editing in the Muslim world [31].

“The mainstream position among Muslim scholars is that before embryos are implanted in the uterus, they do not have the moral status of a human being,” said Ghaly, professor of Islam and biomedical ethics at Hamad Bin Khalifa University. “That is why the scholars find it unproblematic to use them for conducting research with the aim of producing beneficial knowledge.”

Where Muslim religious scholars draw the line, Ghaly says, is at the applications of human germline genome editing, not research about it. Issues regarding the safety and effectiveness of germline editing make its current use in viable human embryos largely untenable, according to the majority of religious scholars [31].

The back-and-forth debate over who should design policies guiding human germline genome editing, and how, rages on, but there is little doubt about consensus on one point. For a technology with effects as far-reaching as this one, time is of the essence.

 

References

  1. Scientist who claims to have made gene-edited babies speaks in Hong Kong. 27 Nov 2018, 36:04 minutes. Global News; [accessed 2 May 2020]. https://youtu.be/0jILo9y71s0.
  2. Cyranoski D. 2018. CRISPR-baby scientist fails to satisfy critics. Nature. 564 (7734): 13-14. 
  3. Regalado A. 2019. China’s CRISPR babies: Read exclusive excerpts from the unseen original research. Cambridge (MA): MIT Technology Review; [accessed 2 May 2020]. https://www.technologyreview.com/2019/12/03/131752/chinas-crispr-babies-read-exclusive-excerpts-he-jiankui-paper/.
  4. Genetic Engineering Will Change Everything Forever – CRISPR. 10 Aug 2016, 16:03 minutes. Kurzgesagt – In a Nutshell; [accessed 2 May 2020]. https://youtu.be/jAhjPd4uNFY.
  5. Doudna J. Editing the Code of Life: The Future of Genome Editing [lecture]. 21 Feb 2019. Endowed Elberg Series. Berkeley (CA): Institute for International Studies. https://youtu.be/9Yblg9wDHZA
  6. Haft DH, Selengut J, Mongodin EF, Nelson KE. 2005. A guild of 45 CRISPR-associated (Cas) protein families and multiple CRISPR/Cas subtypes exist in prokaryotic genomes. PLoS Comput Biol. 1 (6): e60. [about 10 pages].
  7. Makarova KS, Koonin EV. 2015. Annotation and Classification of CRISPR-Cas Systems. Methods Mol Biol. 1311: 47-75.
  8. Hsu PD, Lander ES, Zhang F. 2014. Development and applications of CRISPR-Cas9 for genome engineering. Cell. 157 (6): 1262-78.
  9. Enzmann B. 2019. CRISPR Editing is All About DNA Repair Mechanisms. Redwood City (CA): Synthego; [accessed 2 May 2020]. https://www.synthego.com/blog/crispr-dna-repair-pathways.
  10. Normile D. 2019. Chinese scientist who produced genetically altered babies sentenced to 3 years in jail. Science; [accessed 2 May 2020]. https://www.sciencemag.org/news/2019/12/chinese-scientist-who-produced-genetically-altered-babies-sentenced-3-years-jail.
  11. Cyranoski D. 2019. The CRISPR-baby scandal: what’s next for human gene-editing. Nature. 566 (7745): 440-442.
  12. Osborne H. 2020. China confirms three gene edited babies were born through He Jiankui’s experiments. New York City (NY): Newsweek; [accessed 2 May 2020]. https://www.newsweek.com/china-third-gene-edited-baby-1480020.
  13. Cohen J. 2019. Embattled Russian scientist sharpens plans to create gene-edited babies. Science; [accessed 2 May 2020]. https://www.sciencemag.org/news/2019/10/embattled-russian-scientist-sharpens-plans-create-gene-edited-babies#.
  14. Zayner J. 2020. CRISPR babies scientist He Jiankui should not be villainized –– or headed to prison. Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/01/02/crispr-babies-scientist-he-jiankui-should-not-be-villainized/.
  15. Baer N. 2020. Covid-19 is scary. Could a rogue scientist use CRISPR to conjure another pandemic? Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/03/26/could-rogue-scientist-use-crispr-create-pandemic/.
  16. Wang H, Yang H. 2019. Gene-edited babies: What went wrong and what could go wrong. PLoS Biol 17 (4): e3000224. [about 5 pages].
  17. About Lulu and Nana: Twin Girls Born Healthy After Gene Surgery As Single-Cell Embryos. 25 Nov 2018, 4:43 minutes. The He Lab; [accessed 2 May 2020]. https://youtu.be/th0vnOmFltc.
  18. Samson M, Libert F, Doranz BJ, Rucker J, Liesnard C, Farber CM, Saragosti S, Lapouméroulie C, Cognaux J, Forceille C, et al. 1996. Resistance to HIV-1 infection in caucasian individuals bearing mutant alleles of the CCR-5 chemokine receptor gene. Nature. 382 (6593): 722–5.
  19. Marmor M, Sheppard HW, Donnell D, Bozeman S, Celum C, Buchbinder S, Koblin B, Seage GR. 2001. Homozygous and heterozygous CCR5-Delta32 genotypes are associated with resistance to HIV infection. J Acquir Immune Defic Syndr. 27 (5): 472–81.
  20. Lopalco L. 2010. CCR5: From Natural Resistance to a New Anti-HIV Strategy. Viruses. 2 (2): 574–600.
  21. Martinson JJ, Chapman NH, Rees DC, Liu YT, Clegg JB. 1997. Global distribution of the CCR5 gene 32-base-pair deletion. Nat Genet. 16 (1): 100–3. 
  22. Zhang C, Fu S, Xue Y, Wang Q, Huang X, Wang B, Liu A, Ma L, Yu Y, Shi R, et al. 2002. Distribution of the CCR5 gene 32-basepair deletion in 11 Chinese populations. Anthropol Anz. 60 (3): 267–71.
  23. Sofia M. Yep. They Injected CRISPR Into an Eyeball. 19 May 2020, 8:43 minutes. NPR Short Wave; [accessed 2 May 2020]. https://www.npr.org/2020/03/18/yep-they-injected-crispr-into-an-eyeball.
  24. Zafer M, Horvath H, Mmeje O, van der Poel S, Semprini AE, Rutherford G, Brown J. 2016. Effectiveness of semen washing to prevent human immunodeficiency virus (HIV) transmission and assist pregnancy in HIV-discordant couples: a systematic review and meta-analysis. Fertil Steril. 105 (3): 645–55.
  25. Halpern J, O’Hara S, Doxzen K, Witkowsky L, Owen A. 2019. Societal and Ethical Impacts of Germline Genome Editing: How Can We Secure Human Rights? The CRISPR Journal. 2 (5): 293-298. 
  26. CRISPR Consensus? Public debate and the future of genome editing in human reproduction [conference]. 26 Oct 2019. Berkeley, CA: Innovative Genomics Institute. https://youtu.be/SFrKjItaWGc.
  27. Weiss E. 2020. Should ‘broken’ genes be fixed? My daughter changed the way I think about that question. Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/02/21/should-broken-genes-be-fixed-my-daughter-changed-the-way-i-think-about-that-question/.
  28. Grønskov K, Ek J, Brondum-Nielsen K. 2007. Oculocutaneous albinism. Orphanet J Rare Dis. 2 (43). [about 8 pages]. 
  29. Hasson K. 2019. Illness or Identity? A Disability Rights Scholar Comments on the Plan to Use CRISPR to Prevent Deafness. Berkeley (CA): Center for Genetics and Society; [accessed 2 May 2020]. https://www.geneticsandsociety.org/biopolitical-times/illness-or-identity-disability-rights-scholar-comments-plan-use-crispr-prevent.
  30. Beitiks E. 5 Reasons Why We Need People with Disabilities in The CRISPR Debates. San Francisco (CA): Paul K. Longmore Institute on Disability; [accessed 2 May 2020]. https://longmoreinstitute.sfsu.edu/5-reasons-why-we-need-people-disabilities-crispr-debates.
  31. Ghaly M. 2019. Islamic Ethical Perspectives on Human Genome Editing. Issues in Science and Technology. 35 (3): 45-48.

The Role of Dendritic Spine Density in Neuropsychiatric and Learning Disorders

Photo originally by MethoxyRoxy on Wikimedia Commons. No changes. CC License BY-SA 2.5.

By Neha Madugala, Cognitive Science, ‘21

Author’s Note: Last quarter I took Neurobiology (NPB100) with Karen Zito, a professor at UC Davis. I was interested in her research on dendritic spines and its connection to my own area of interest: the language and cognitive deficits present in different populations, such as individuals with schizophrenia. There seems to be a correlational link between the generation and quantity of dendritic spines and the presence of different neurological disorders. Given the dynamic nature of dendritic spines, current research is studying their exact role and the potential to manipulate these spines in order to impact learning and memory.

 

Introduction 

Dendritic spines are small bulbous protrusions that line the sides of dendrites on a neuron [12]. Dendritic spines serve as a major site of synapses for excitatory neurons, which continue signal propagation in the brain. Relatively little is known about the exact purpose and role of dendritic spines, but as of now, there seems to be a correlation between the concentration of dendritic spines and the presence of different disorders, such as autism spectrum disorders (ASD), schizophrenia, and Alzheimer’s disease. Scientists hypothesize that dendritic spines are a key player in the pathogenesis of various neuropsychiatric disorders [8]. It should be noted that other morphological changes are also observed when individuals with these neuropsychiatric disorders are compared to neurotypical individuals. However, all these disorders share the common thread of abnormal dendritic spine density.

The main disorders studied in relation to dendritic spine density are autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Current studies suggest that these disorders cause the number of dendritic spines to stray from what is observed in a neurotypical individual. It should be noted that there is a general decline in dendritic spines as an individual ages; however, intellectual disabilities and neuropsychiatric disorders seem to alter this density at a more extreme rate. The graph demonstrates the general trend of dendritic spine density for various disorders, although these trends may vary slightly across individuals with the same disorder.

 

Dendritic Spines

I. Role of Dendritic Spines

Dendritic spines are protrusions found on certain types of neurons throughout the brain, such as in the cerebellum and cerebral cortex. They were first identified by Ramon y Cajal, who classified them as “thorns or short spines” located nonuniformly along the dendrite [6]. 

The entire human cerebral cortex contains approximately 10¹⁴ dendritic spines, and a single dendrite can contain several hundred spines [12]. There is an overall greater density of dendritic spines on peripheral dendrites than on proximal dendrites and the cell body [3]. Their main role is to assist in synapse formation on dendrites.

Dendritic spines fall into two categories: persistent and transient. Persistent spines are considered ‘memory’ spines, while transient spines are considered ‘learning’ spines. Transient spines exist for four days or less, whereas persistent spines exist for eight days or longer [5].

The dense concentration of spines on dendrites is crucial to how dendrites function. At an excitatory synapse, neurotransmitter released onto excitatory receptors on the postsynaptic cell produces an excitatory postsynaptic potential (EPSP), a small depolarization that pushes the cell toward firing an action potential, the electrical impulse by which a signal is transmitted from one neuron to the next. For a neuron to fire, positive charge must accumulate at its synapses until the membrane reaches a threshold level of depolarization, meaning the inside of the cell becomes sufficiently less negative relative to the outside (Figure 2). A single EPSP may not produce enough depolarization to reach this action potential threshold. The presence of multiple dendritic spines on the dendrite allows multiple synapses to form and multiple EPSPs to be summated, and with this summation the cell can reach the action potential threshold. A greater density of dendritic spines along the postsynaptic cell therefore allows more synaptic connections to form, increasing the chance that an action potential will occur; a simple numerical sketch of this summation follows the steps listed in Figure 2.

Figure 2. Firing of Action Potential (EPSP)

  1. Neurotransmitter is released by the presynaptic cell into the synaptic cleft. 
  2. For an EPSP, an excitatory neurotransmitter will be released, which will bind to receptors on the postsynaptic cell. 
  3. The binding of these excitatory neurotransmitters will result in sodium channels opening, allowing sodium to go down its electrical and chemical gradient – depolarizing the cell. 
  4. The EPSPs will be summated at the axon hillock and trigger an action potential. 
  5. This action potential will cause the firing cell to release a neurotransmitter at its axon terminal, further conveying the electrical signal to other neurons. 
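As a rough numerical sketch of the summation described above, with illustrative millivolt values rather than measurements, the following Python snippet adds EPSP amplitudes to a resting membrane potential and checks whether the result crosses an assumed firing threshold.

```python
# Toy sketch of EPSP summation (millivolt values are illustrative, not measured):
# a single EPSP usually falls short of threshold, but several EPSPs arriving
# through different dendritic spines can sum past it and trigger an action potential.

RESTING_MV = -70.0    # assumed resting membrane potential
THRESHOLD_MV = -55.0  # assumed action potential threshold

def fires(epsp_amplitudes_mv: list[float]) -> bool:
    """Return True if the summed EPSPs depolarize the membrane past threshold."""
    membrane_potential = RESTING_MV + sum(epsp_amplitudes_mv)
    return membrane_potential >= THRESHOLD_MV

print(fires([4.0]))                 # one EPSP alone: False (no action potential)
print(fires([4.0, 5.0, 3.5, 4.5]))  # summed EPSPs: True (threshold reached)
```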

 

II. Creation 

Dendrites initially are formed without spines. As development progresses, the plasma membrane of the dendrite forms protrusions called filopodia. These filopodia then form synapses with axons, and eventually transition from filopodia to dendritic spines [6]. 

The reason behind the creation of dendritic spines is currently unknown, but there are a few potential hypotheses. The first hypothesis suggests that the presence of dendritic spines increases the packing density of synapses, allowing more potential synapses to be formed. The second hypothesis suggests that spines help prevent excitotoxicity, the overexcitation of the excitatory receptors (NMDA and AMPA receptors) present on the dendrites. These receptors usually bind glutamate, a typically excitatory neurotransmitter released from the presynaptic cell, and overexcitation can damage the neuron or, in more severe cases, kill it. Because dendritic spines compartmentalize charge [3], they help prevent the dendrite from being over-excited beyond the threshold potential for an action potential. Lastly, a third hypothesis suggests that the large variation in dendritic spine morphology reflects a role in modulating how postsynaptic potentials are processed by the dendrite, depending on the function of the signal.

The creation of these dendritic spines is rapid during early development, slowly tapering off as the individual gets older. In older individuals, this process is largely replaced by the pruning of synapses formed with dendritic spines. Pruning helps improve the signal-to-noise ratio of signals sent within neuronal circuits [3]; this ratio compares the signals sent by neurons with the signals actually received by postsynaptic cells and determines the efficiency of signal transmission. Experimentation has shown that the presence of glutamate and excitatory receptors (such as NMDA and AMPA) can result in the formation of dendritic spines within seconds [3]. The introduction of NMDA and AMPA results in cleavage of intercellular adhesion molecule-5 (ICAM5) from hippocampal neurons. ICAM5 is a “neuronal adhesion molecule that regulates dendritic elongation and spine maturation” [11]. Furthermore, using a combination of fluorescent dye and confocal or two-photon laser scanning microscopy, scientists have observed that spines can undergo minor changes within seconds and more drastic conformational changes, even disappearing entirely, over minutes to hours [12].

 

III. Morphology

The spine head’s morphology, a large bulbous head connected to a very thin neck that attaches to the dendrite, assists in its role as a postsynaptic cell. This shape allows one synapse at a dendritic spine to be activated and strengthened without influencing neighboring synapses [12]. 

Dendritic spine shape is extremely dynamic, allowing one spine to slightly alter its morphology throughout its lifetime [5]. However, dendritic spine morphology seems to take on a predominant form determined by the brain region in which the spine is located. For instance, spines receiving input from presynaptic neurons in the thalamus tend to take on the mushroom shape, whereas neurons in the lateral nucleus of the amygdala have thin spines on their dendrites [2]. The type of neuron and the brain region the spine originates from appear to be correlated with the observed morphology.

The spine contains a postsynaptic density, which consists of neurotransmitter receptors, ion channels, scaffolding proteins, and signaling molecules [12]. In addition, the spine has smooth endoplasmic reticulum, which forms stacks called the spine apparatus. It also has polyribosomes, hypothesized to be the site of local protein synthesis in these spines, and an actin-based cytoskeleton for structure [12]. The actin-based cytoskeleton makes up for the lack of microtubules and intermediate filaments, which play a crucial role in the structure and transport of most animal cells. Furthermore, these spines are capable of compartmentalizing calcium, the ion that at neural synapses signals the presynaptic cell to release its neurotransmitter into the synaptic cleft [12]. Calcium plays a crucial role in second messenger cascades, influencing neural plasticity [6]. It also plays a role in actin polymerization, which allows for the motile nature of spine morphology [6].

Dendritic spines take on a variety of shapes. The common types are ‘stubby’ (short and thick spines with no neck), ‘thin’ (small head and thin neck), ‘mushroom’ (large head with a constricted neck), and ‘branched’ (two heads branching from the same neck) [12].

 

IV. Learning and Memory

Dendritic spines play a crucial role in memory and learning through long-term potentiation (LTP), which is thought to be the cellular basis of learning and memory. LTP is thought to induce spine formation, consistent with the common observation that learning is associated with the formation of dendritic spines. Furthermore, LTP is thought to be capable of altering both the immature and the mature hippocampus, a region commonly associated with memory [2]. In contrast, long-term depression (LTD) works in the opposite direction, decreasing dendritic spine density and size [2].

The exact relationship between dendritic spines and learning is still poorly understood. There seems to be a general trend suggesting that the creation of these spines is associated with learning, but it is unclear whether learning results in the formation of these spines or whether the formation of these spines results in learning. The general idea behind this hypothesis is that dendritic spines aid in the formation of synapses, allowing the brain to form more connections. As a result, a decline in dendritic spines in neuropsychiatric disorders, such as schizophrenia, can inhibit an individual’s ability to learn. This is reflected in the various cognitive and linguistic deficits observed in individuals with schizophrenia.

Memory is associated with the strengthening and weakening of connections due to LTP and LTD, respectively. The alteration of these spines through LTP and LTD is called activity-dependent plasticity [6]. The main morphological shapes associated with memory are the mushroom spine, with a large head and a constricted neck, and the stubby spine, a short and thick spine with no neck [6]. Both of these spines are relatively large, resulting in more stable and enduring connections; these larger spines associated with learning are a result of LTP. By contrast, transient spines (which persist for four days or fewer) are usually smaller and more immature in morphology and function, resulting in more temporary and less stable connections.

LTP and LTD play a crucial role in modifying dendritic spine morphology. Neuropsychiatric disorders can alter these mechanisms resulting in abnormal density and size of these spines.

 

Schizophrenia 

I. What is Schizophrenia? 

Schizophrenia is a mental disorder that results in disordered thinking and behaviors, hallucinations, and delusions [9]. The exact mechanics of schizophrenia are still being studied as researchers try to determine the underlying biological causes of the disorder and ways to help affected individuals. Current treatment focuses on reducing, and in some cases eliminating, symptoms, but more research and understanding are required to fully treat this mental disorder.

 

II. Causation

The exact source of schizophrenia seems to lie somewhere between the presence of certain genes and environmental effects. There seems to be a correlation between traumatic or stressful life events during adolescence and an increased susceptibility to developing schizophrenia [1]. While research is still underway, certain studies point to cannabis having a role in increasing susceptibility to schizophrenia or worsening symptoms in individuals who already have the disorder [1]. There also seems to be some form of genetic correlation, given the increased likelihood of developing schizophrenia when a family member has it. This factor seems to result from a combination of genes; however, no specific genes have been identified yet. There also appears to be a chemical component, given the variation in chemical composition and density between neurotypical individuals and individuals with schizophrenia. Specifically, researchers have observed elevated levels of dopamine in individuals with schizophrenia [1].

 

III. Relationship between Dendritic Spines and Schizophrenia 

A common thread among most schizophrenia patients is impaired dendritic morphology of pyramidal neurons (a prominent cell type of the cerebral cortex), occurring in various regions of the cortex [7]. Postmortem brain tissue studies show a reduced density of dendritic spines in the brains of individuals with schizophrenia. These findings are consistent across the brain regions that have been studied, such as the frontal and temporal neocortex, the primary visual cortex, and the subiculum within the hippocampal formation [7]. Across seven studies reporting this finding, the median decrease in spine density was 23%, with individual studies reporting declines ranging from 6.5% to 66% [7].

It should be noted that studies have examined whether the decline in spine density is due to the use of antipsychotic drugs; however, animal and human trials showed no significant difference in the dendritic spine density of treated individuals.

This decline in dendritic spine density is hypothesized to result either from a failure to produce sufficient dendritic spines at birth or from a more rapid loss of these spines during adolescence, when the onset of schizophrenia is typically observed [7]. The source of this decline is unclear, but it has been attributed to deficits in pruning, maintenance, or the mechanisms underlying the formation of these dendritic spines [7].

The findings are not entirely consistent, however. For instance, Thompson et al. conducted an in vivo study suggesting that a decline in spine density underlies the progressive loss of gray matter typically observed in individuals with schizophrenia. Using MRI scans of twelve individuals with schizophrenia and twelve neurotypical individuals, the study found a progressive decline in gray matter starting in the parietal lobe and expanding out to motor, temporal, and prefrontal areas [10]. The authors attribute this loss mainly to a decline in dendritic spine density as the disorder progresses, which is consistent with the previously mentioned hypothesis of a loss of spines during adolescence.

It is also possible that a combination of both factors is at play. Most studies have only been able to examine postmortem brain tissue, leaving it unclear whether spines decline over time or are simply never produced in the first place. The scarcity of in vivo studies makes it difficult to identify a concrete trend in the data.

 

Conclusion

While research is still ongoing, current evidence suggests that dendritic spines play a crucial role in learning and memory. Their importance is reflected in their reduced density in various neuropsychiatric disorders, including schizophrenia, certain learning deficits present in some individuals with ASD, and the memory deficits of Alzheimer’s disease. These deficits appear to arise during the early construction of the brain’s neural networks at synapses. Further research into how these spines develop and take on different morphological forms will be crucial in determining how to treat, and potentially cure, the deficits present in neuropsychiatric and learning disorders.

 

References

  1. NHS Choices, NHS, www.nhs.uk/conditions/schizophrenia/causes/.
  2. Bourne, Jennifer N, and Kristen M Harris. “Balancing Structure and Function at Hippocampal Dendritic Spines.” Annual Review of Neuroscience, U.S. National Library of Medicine, 2008, www.ncbi.nlm.nih.gov/pmc/articles/PMC2561948/.
  3. “Dendritic Spines: Spectrum: Autism Research News.” Spectrum, www.spectrumnews.org/wiki/dendritic-spines/.
  4. Hofer, Sonja B., and Tobias Bonhoeffer. “Dendritic Spines: The Stuff That Memories Are Made Of?” Current Biology, vol. 20, no. 4, 2010, doi:10.1016/j.cub.2009.12.040.
  5. Holtmaat, Anthony J.G.D., et al. “Transient and Persistent Dendritic Spines in the Neocortex In Vivo.” Neuron, Cell Press, 19 Jan. 2005, www.sciencedirect.com/science/article/pii/S0896627305000048.
  6. McCann, Ruth F, and David A Ross. “A Fragile Balance: Dendritic Spines, Learning, and Memory.” Biological Psychiatry, U.S. National Library of Medicine, 15 July 2017, www.ncbi.nlm.nih.gov/pmc/articles/PMC5712843/.
  7. Moyer, Caitlin E, et al. “Dendritic Spine Alterations in Schizophrenia.” Neuroscience Letters, U.S. National Library of Medicine, 5 Aug. 2015, www.ncbi.nlm.nih.gov/pmc/articles/PMC4454616/.
  8. Penzes, Peter, et al. “Dendritic Spine Pathology in Neuropsychiatric Disorders.” Nature Neuroscience, U.S. National Library of Medicine, Mar. 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC3530413/.
  9. “Schizophrenia.” Mayo Clinic, Mayo Foundation for Medical Education and Research, 7 Jan. 2020, www.mayoclinic.org/diseases-conditions/schizophrenia/symptoms-causes/syc-20354443.
  10. “Schizophrenia and Dendritic Spines.” Ness Labs, 20 June 2019, nesslabs.com/schizophrenia-dendritic-spines.
  11. “Synaptic Cleft: Anatomy, Structure, Diseases & Functions.” The Human Memory, 17 Oct. 2019, human-memory.net/synaptic-cleft/.
  12. Tian, Li, et al. “Activation of NMDA Receptors Promotes Dendritic Spine Development through MMP-Mediated ICAM-5 Cleavage.” The Journal of Cell Biology, Rockefeller University Press, 13 Aug. 2007, www.ncbi.nlm.nih.gov/pmc/articles/PMC2064474/.
  13. Zito, Karen, and Venkatesh N. Murthy. “Dendritic Spines.” Current Biology, vol. 12, no. 1, 2002, doi:10.1016/s0960-9822(01)00636-4.

Einstein’s Fifth Symphony

By Jessie Lau, Biochemistry and Molecular Biology ‘20

Author’s Note: Growing up, playing the piano was a major part of my life: weekdays were filled with hour-long practices, while Saturdays were for lessons. My schedule was filled with preparations for board exams and recitals, and in the absence of the black and white keys, my fingers were always tapping away at any surface I could find. My parents always told me that learning the piano was good for my education and would put me ahead in school because it would help with my math and critical thinking in the long run. However, I was never able to understand the connection between the ease of reading music and my ability to calculate complex integrals. In this paper, I will explore the benefits of learning an instrument for cognitive development.

 

Introduction

What do Albert Einstein, Werner Heisenberg, Max Planck, and Barbara McClintock all have in common? Other than their Nobel Prize-winning research in their respective fields, all of these scientists shared a love of playing a musical instrument. At an early age, Einstein followed his mother in playing the violin; Heisenberg learned to read music in order to play the piano at the young age of four; Planck became gifted at playing the organ and piano; McClintock played the tenor banjo in a jazz band during her time at Cornell University [1]. While these researchers honed their musical talent, they were engaging both their central and peripheral nervous systems. Playing an instrument requires the coordination of various parts of the brain working together. The motor system executes meticulous movements to produce sound, which is then picked up by the auditory circuitry. Simultaneously, sensory information picked up by the fingers and hands is delivered to the brain. Furthermore, individuals reading music use visual nerves to send information to the brain, where it is processed and interpreted to generate a response carried out by the extremities. All the while, the sound of music elicits an emotional response from the player.

Feedforward and feedback pathways of the brain are two auditory-motor interactions elicited while playing an instrument. Feedforward interactions are predictive processes that can influence motor responses, for example, tapping to the rhythm of a beat in anticipation of upcoming changes and accents in the piece. Feedback interactions, by contrast, are particularly important for stringed instruments such as the violin, where pitch varies continuously and requires constant adjustment [12]. As shown in Figure 1, the musician must auditorily perceive each note and respond with suitably timed motor changes. All of these neurophysiological components raise the question of how musical training confers benefits on brain development. Longitudinal studies find that musical training can have expansive benefits for the development of linguistic skills, executive function, general IQ, and academic achievement [2].
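As a loose illustration of the feedback interaction described above, with entirely arbitrary numbers, the following Python sketch models a string player repeatedly comparing the heard pitch with the intended pitch and making partial corrections until the note is in tune.

```python
# Loose sketch of the auditory-motor feedback loop described above (numbers are
# arbitrary): the player hears the produced pitch, compares it with the intended
# pitch, and makes partial corrective adjustments until the note sounds in tune.

TARGET_HZ = 440.0    # intended pitch (A4), chosen for illustration
TOLERANCE_HZ = 1.0   # an error this small is treated as "in tune"
GAIN = 0.5           # fraction of the perceived error corrected per adjustment

def tune(produced_hz: float, max_steps: int = 20) -> float:
    """Iteratively nudge the produced pitch toward the target pitch."""
    for _ in range(max_steps):
        error = TARGET_HZ - produced_hz  # auditory feedback: perceived pitch error
        if abs(error) <= TOLERANCE_HZ:
            break
        produced_hz += GAIN * error      # motor response: partial correction
    return produced_hz

print(round(tune(452.0), 2))  # starts sharp at 452 Hz, settles near 440 Hz
```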

 

 

Linguistic Skills

Music shares the same dorsal auditory pathway and processing center in the brain with all other sounds. This passageway is anatomically linked by the arcuate fasciculus, suggesting that instrumental training will translate into language-related skills. This unilateral pathway is central to an array of language-related skills, including language development, second-language acquisition, and verbal memory [2]. According to Vaquero et al., “Playing an instrument or speaking multiple languages involve mapping sounds to motor commands, recruiting auditory and motor regions such as the superior temporal gyrus, inferior parietal, inferior frontal and premotor areas, that are organized in the auditory dorsal stream” [10].

Researchers studying the effects of acoustic sounds mimicking stop-consonant speech on language development find that children learning instruments during the critical developmental period (0-6 years old) build lasting structural and organizational modifications in their auditory systems that later affect language skills. Stop consonants include the voiceless sounds /p/, /t/, and /k/, as well as the voiced sounds /b/, /d/, and /g/. Dr. Strait and her colleagues describe their observations: “Given relationships between subcortical speech-sound distinctions and critical language and reading skills, music training may offer an efficient means of improving auditory processing in young children” [11].

Similarly, Dr. Patel suggests that speech and music share overlapping brain connections because of the accuracy required while playing an instrument. Refining this finesse demands attentional training combined with self-motivation and determination. Repeated stimulation of these brain networks garners “emotional reinforcement potential,” which is key to “… good performance of musicians in speech processing” [3].

Beyond stimulating auditory neurons, instrumental training has been shown to improve verbal memory. For example, in a comparative analysis, researchers found that children who had undergone musical training demonstrated verbal memory advantages over their peers without training [4]. Furthermore, following up a year after the initial study, they found that continued practice led to substantial advancement in verbal memory, while those who discontinued failed to show any improvement. This finding is supported by Jakobson et al., who propose that “… enhanced verbal memory performance in musicians is a byproduct of the effect of music instruction on the development of auditory temporal-order processing abilities” [5].

In the case of acquiring a second language (abbreviated L2), a study conducted on 50 Japanese adults learning English finds that “… the ability to analyze musical sound structure would also likely facilitate the analysis of a novel phonological structure of an L2” [6]. These researchers further elaborate on the potential of improving English syntax with musical exercises concentrating on syntactic processes, such as “… hierarchical relationships between harmonic or melodic musical elements” [6]. Multiple studies have also found that music training engages specific brain structures that are employed during language processing, including Heschl’s gyrus and Broca’s and Wernicke’s areas [2].

While music and language elements are stored in different regions of the brain, their shared auditory pathway allows instrumental training to strengthen linguistic development in multiple areas.

 

Executive Function

Executive function is the use of the prefrontal cortex to carry out tasks requiring conscious effort to attain a goal, particularly in novel scenarios [7]. This umbrella term includes cognitive control of attention and inhibition, working memory, and the ability to switch between tasks. Psychologists Dr. Hannon and Dr. Trainor find that formal musical education produces “… domain-specific effects on the neural encoding of musical structure, enhancing musical performance, music reading and explicit knowledge of the musical structure” [8]. The combination of domain-general development and executive functioning can influence linguistic as well as mathematical development. While learning an instrument, musicians must actively read musical notation, itself a unique language, and translate what they see into precise mechanical maneuvers, all while attending to and correcting errors in harmony, tone, beat, and fingering. Furthermore, becoming well trained requires scheduled rehearsals that build a foundational framework for operating the instrument while new technical elements and robust spatial awareness are developed. Thus, the explicit practice of executive function during these scheduled sessions is essential in sculpting this region of the prefrontal cortex.

 

General IQ and Academic Performance

While listening to music has also been found to confer academic advantages, the deliberate, ordered practice of playing music grants musically trained individuals scholastic benefits absent in their counterparts. In a study conducted by Schellenberg, 144 six-year-olds were assigned to one of four groups: the music groups received keyboard or voice lessons, while the control groups received drama classes or no lessons at all. After 36 weeks, testing with the Wechsler Intelligence Scale for Children, Third Edition (WISC-III), a battery of assessments used to evaluate intelligence, showed a significant increase in IQ in all four groups, an increase that can also be attributed to the start of grade school. Despite this general rise, the children who received keyboard or voice lessons showed a greater jump in IQ. Schellenberg interprets the elevated IQ scores of musically trained six-year-olds as mirroring the effect of school attendance: participation in school raises IQ, and the smaller the learning setting, the more academic success a student tends to achieve. Music lessons are often taught individually or in small groups, mirroring that structure and thereby producing similar IQ boosts.

 

Possible Confounding Factors

While these studies have found a positive correlation between learning a musical instrument and various measures of brain maturation, the researchers note the importance of considering confounding factors that often cannot be controlled for. These include socioeconomic status, prior IQ, education, and the other activities in which participants take part. Although many of these researchers worked to recruit subjects who were similar in these respects, all of these external elements can play an essential role during one's developmental period. Moreover, formally learning an instrument is often a financially hefty extracurricular activity, so more affluent families with higher educational backgrounds can more readily afford these programs for their children.

Furthermore, each study implemented different practice times, training durations, and instruments that participants were required to learn. Findings gathered under one set of parameters may not be reproducible under another.

Beyond these external factors, one must also consider each participant's willingness to learn the instrument. If a person lacks the desire or motivation to become musically educated, spending the required time playing the instrument does not necessarily translate into developmental gains, because the relevant regions of the brain are not actively engaged.

 

Conclusion

Numerous studies have demonstrated the varied benefits that music training can provide for cognitive development. Extensive research has shown that the process of physically, mentally, and emotionally engaging oneself in learning an instrument can confer diverse advantages on the maturing brain. The discipline and rigor needed to gain expertise on a musical instrument carry over, often unconsciously, into academic and experimental settings. Likewise, the unique melodies of each piece spark creativity and imaginative thinking. While instrumental education does not fully account for Einstein, Heisenberg, Planck, and McClintock's scientific success, this extracurricular activity has been shown to provide a substantial boost in critical thinking.

 

References

  1. “The Symphony of Science.” The Nobel Prize, March 2019. https://www.symphonyofscience.com/vids.
  2. Miendlarzewska, Ewa A., and Wiebke J. Trost. “How Musical Training Affects Cognitive Development: Rhythm, Reward and Other Modulating Variables.” Frontiers in Neuroscience 7 (January 20, 2014). https://doi.org/10.3389/fnins.2013.00279.
  3. Patel, Aniruddh D. “Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.” Frontiers in Psychology 2 (June 29, 2011): 1–14. https://doi.org/10.3389/fpsyg.2011.00142.
  4. Ho, Yim-Chi, Mei-Chun Cheung, and Agnes S. Chan. “Music Training Improves Verbal but Not Visual Memory: Cross-Sectional and Longitudinal Explorations in Children.” Neuropsychology 17, no. 3 (August 2003): 439–50. https://doi.org/10.1037/0894-4105.17.3.439.
  5. Jakobson, Lorna S., Lola L. Cuddy, and Andrea R. Kilgour. “Time Tagging: A Key to Musicians’ Superior Memory.” Music Perception 20, no. 3 (2003): 307–13. https://doi.org/10.1525/mp.2003.20.3.307.
  6. Slevc, L. Robert, and Akira Miyake. “Individual Differences in Second-Language Proficiency.” Psychological Science 17, no. 8 (2006): 675–81. https://doi.org/10.1111/j.1467-9280.2006.01765.x.
  7. Banich, Marie T. “Executive Function.” Current Directions in Psychological Science 18, no. 2 (April 1, 2009): 89–94. https://doi.org/10.1111/j.1467-8721.2009.01615.x.
  8. Hannon, Erin E., and Laurel J. Trainor. “Music Acquisition: Effects of Enculturation and Formal Training on Development.” Trends in Cognitive Sciences 11, no. 11 (November 2007): 466–72. https://doi.org/10.1016/j.tics.2007.08.008.
  9. Schellenberg, E. Glenn. “Music Lessons Enhance IQ.” Psychological Science 15, no. 8 (August 1, 2004): 511–14. https://doi.org/10.1111/j.0956-7976.2004.00711.x.
  10. Vaquero, Lucía, Paul-Noel Rousseau, Diana Vozian, Denise Klein, and Virginia Penhune. “What You Learn & When You Learn It: Impact of Early Bilingual & Music Experience on the Structural Characteristics of Auditory-Motor Pathways.” NeuroImage 213 (2020): 116689. https://doi.org/10.1016/j.neuroimage.2020.116689.
  11. Strait, D. L., S. O’Connell, A. Parbery-Clark, and N. Kraus. “Musicians’ Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30.” Cerebral Cortex 24, no. 9 (2013): 2512–21. https://doi.org/10.1093/cercor/bht103.
  12. Zatorre, Robert J., Joyce L. Chen, and Virginia B. Penhune. “When the Brain Plays Music: Auditory–Motor Interactions in Music Perception and Production.” Nature Reviews Neuroscience 8, no. 7 (July 2007): 547–58. https://doi.org/10.1038/nrn2152.

Cryogenic Electron Microscopy: A Leap Forward for UC Davis

Photo originally published in Structural Studies of the Giant Mimivirus. PLoS Biol 7(4): e1000092. doi:10.1371/journal.pbio.1000092. License: CC BY 2.5.

By Nathan Levinzon, Neurobiology, Physiology, and Behavior ‘23

Author’s Note: The purpose of this article is to inform the UC Davis community about the arrival and use of a groundbreaking technology to campus. I hope to have provided a comprehensive introduction to Cryo-EM, information on Cryo-EM at UC Davis, and an example of how the technology is already being used to solve problems in biology on campus. I also aim to share my excitement regarding this technology in the hope that I inspire others to pursue this interesting and advancing field of study.

 

Cryo-electron microscopy, often abbreviated as “Cryo-EM,” is a form of microscopy that uses beams of electrons instead of light to illuminate cryogenically frozen samples. Because the wavelength of an electron is much shorter than the wavelength of light, samples can be imaged at mind-boggling resolutions. After the sample is imaged in many orientations, the images are computationally combined to resolve a three-dimensional structure. “If you want to imagine what it’s like to use this technology,” UC Davis Professor Jawdat Al-Bassam explains in an interview for the College of Biological Sciences, “think about walking into a museum, looking at a statue, taking pictures of it, and figuring out how to put those pictures together to get a three-dimensional picture. In essence, that’s what we do with molecules. They are like small molecular statues, and we take images of them at a variety of angles and orientations. We combine these images to get a design plan for how these molecules are put together” [1].

As a result of recent advances in hardware and software, the progress in Cryo-EM resolution seems limitless. New microscopes on the market have brought achievable resolution down to about two angstroms—twice the diameter of a hydrogen atom—with even higher resolutions yet to come. Before 2010, scientists could achieve resolutions of, at best, about four angstroms. This incredible and exciting variant of microscopy stands to shape the future of the biological sciences. Dean of Biological Sciences Mark Winey says that “Cryo-EM is certainly part of the portfolio of technology that any campus like UC Davis should have,” and it’s easy to see why [1].
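
A rough calculation shows why the electron's wavelength is not what limits these resolutions. The sketch below computes the relativistic de Broglie wavelength of an accelerated electron; the 200 kV accelerating voltage is an assumed, typical value for this class of instrument rather than a quoted specification.

```python
# Relativistic de Broglie wavelength of an electron accelerated through V volts.
# The 200 kV figure is an assumed, typical accelerating voltage (not a spec).
import math

h = 6.626e-34    # Planck constant (J*s)
m_e = 9.109e-31  # electron rest mass (kg)
e = 1.602e-19    # elementary charge (C)
c = 2.998e8      # speed of light (m/s)

def electron_wavelength(voltage):
    """Return the electron wavelength in meters at the given accelerating voltage."""
    energy_term = 2 * m_e * e * voltage * (1 + e * voltage / (2 * m_e * c**2))
    return h / math.sqrt(energy_term)

lam = electron_wavelength(200e3)
print(f"{lam * 1e12:.2f} pm, i.e. about {lam * 1e10:.3f} angstroms")
# ~2.5 pm (~0.025 angstroms): thousands of times shorter than visible light
# (400-700 nm), so imaging physics and sample damage, not wavelength,
# set the ~2 angstrom limit.
```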

One of the most advanced Cryo-EM microscopes on the market today is the newly released Glacios Cryo-Transmission Electron Microscope (TEM) by Thermo Fisher. On the surface, the Glacios functions like any other TEM: a cryogenically frozen sample is prepared and illuminated with electrons that strike a camera, producing a high-resolution, black-and-white image. What sets this microscope apart, however, is its groundbreaking camera. The camera's pixels are slightly smaller than the area over which a single electron deposits its signal, which enables a high-speed electron detector to locate the center of each electron event with sub-pixel precision. The end result is a roughly fourfold improvement in resolution over older TEMs, while simultaneously reducing aliasing, a sampling artifact.
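
The idea behind sub-pixel event localization can be illustrated with a simple intensity-weighted centroid. This is only a minimal sketch of the general approach, not the detector's actual counting algorithm.

```python
# Minimal sketch of sub-pixel event localization: take the small patch of
# pixel intensities around one detected electron event and compute its
# intensity-weighted centroid in fractional pixel coordinates.
import numpy as np

def event_centroid(patch):
    """Return the (row, col) centroid of an event patch with sub-pixel precision."""
    patch = np.asarray(patch, dtype=float)
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Example: charge from one electron spread unevenly over a 3x3 pixel patch.
patch = [[5.0, 3.0, 0.5],
         [3.0, 2.0, 0.5],
         [0.5, 0.5, 0.2]]
print(event_centroid(patch))  # ~ (0.52, 0.52): a position between pixel centers
```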

With this microscope, researchers can examine life at the molecular level better than ever before. The closed-system design ensures a safe and robust workflow through every step of microscopy, from sample preparation and optimization to image acquisition and data processing, for up to twelve samples at a time [2]. Its throughput is as impressive as its small footprint, allowing it to be installed in labs with pre-existing infrastructure. Automated sample loading and lens alignment have made Cryo-EM faster and easier for both the budding and the seasoned scientist.

UC Davis has recently made a large investment of its own in Cryo-EM. On January 31, 2020, the College of Biological Sciences celebrated the ribbon-cutting for its own Thermo Fisher Scientific Glacios Cryo-Transmission Electron Microscope, outfitted with a Gatan K3 direct detector camera. Festivities were short-lived, however, because labs were already in line to use the new machine. Researchers in Professor Al-Bassam's lab were among the first to use the microscope while studying kinesin, a motor protein found in eukaryotic cells. By using Cryo-EM to resolve the structure of kinesin, they concluded that kinesin's tails open a part of the motor that encapsulates ATP, slowing the movement of these motors and allowing kinesins to cluster and work together. With this new microscope in hand, these researchers can now unravel how kinesins interact with one another to move and group. The complete paper discussing the binding between the kinesin tail and motor domains and its function in microtubule sliding can be found in the January 2020 edition of eLife [3].

Cryo-EM has never been easier, safer, or more accessible to use at UC Davis. With the purchase of the Glacios, UC Davis is ready to introduce a new generation of researchers to the field of modern biology. Resolutions that were thought impossible ten years ago are now a reality, and new advancements continue to push the limits of the resolution at which samples can be imaged. With the quickening pace of advancements in Cryo-EM, there is no telling what mysteries researchers at UC Davis will uncover next.

 

References

  1. Slipher, David, et al. “CRYO EM: Unleashing the Future of Biology at UC Davis.” UC Davis College of Biological Sciences, 31 Jan. 2020, biology.ucdavis.edu/cryo-em. Accessed 23 Mar. 2020.
  2. “Cryo TEM: Cryo-EM.” Thermo Fisher Scientific – US, www.thermofisher.com/us/en/home/electron-microscopy/products/transmission-electron-microscopes/glacios-cryo-tem.html.
  3. Bodrug, Tatyana, et al. “The Kinesin-5 Tail Domain Directly Modulates the Mechanochemical Cycle of the Motor Domain for Anti-Parallel Microtubule Sliding.” ELife, vol. 9, 2020, doi:10.7554/elife.51131.

Ode to the Eye: Movement of Mitochondria in Retinal Ganglion Cells

By Nicholas Garaffo, Biochemistry and Molecular Biology, ‘20

Author’s Note: I wrote this paper in an attempt to connect my research project to a non-science audience. While the topic is very scientific, I am attempting to translate the molecular biology of the eye into language any reader can understand. With this paper, I hope more people become interested in basic biology and gain a new appreciation for the eye.

 

Ode to the Eye

Right now, your irises are contracting, folding, and adjusting to control the amount of light allowed in. Photons reflected from these words move through your pupil, pass through the aqueous cavity within your eye, and are absorbed by the 0.2 mm-thick retinal cell layer at the back of your eye (1, 2). There, the photons are absorbed by the photoreceptor cells, known as rods and cones. Once a photon is absorbed, the photoreceptor undergoes a rapid change in membrane potential, allowing a signal to travel along its axon. The signal is released from the photoreceptors and received by the bipolar cells, which undergo the same process. Hundreds of photoreceptor cells connect to a single bipolar cell, and hundreds of bipolar cells connect to a single retinal ganglion cell (RGC) (1, 2). RGCs are the bridge between the eye and the brain. Without these cells, the light's signal ends there and is never used to create an image. All of this happens as fast as you can read these words; what a beautiful thing the eye is!

 

Introduction

The cells within the eye depend on one another for its overall performance, yet even small alterations can be detrimental. As a fluid-filled cavity, the eye expands and contracts in response to external pressures. This is a normal process: every time you blink, rub your eye, or sneeze, the pressure within the eye, known as intraocular pressure (IOP), spikes. An IOP above 22 mmHg (16-22 mmHg is considered physiologically normal) can occur when the ciliary bodies, the muscles responsible for flushing and recycling the eye's internal fluid, become clogged (3). Fluid then begins to accumulate within the eye, causing the cavity to expand. While brief spikes in IOP are rarely damaging, prolonged exposure to high IOP can strain RGCs. Recall that RGCs are the bridge between the eye and the brain; they exit the eye through a pore at the back of the eye known as the optic nerve head (ONH). To protect and accelerate the signal from the eye to the brain, RGCs form a tubular structure with other cell types. This structure, called the optic nerve, is composed of RGCs, blood vasculature, and glia, a large family of neuronal support cells; this review will focus specifically on one type of glial cell, the astrocyte.

The main site of RGC damage is the ONH, the area connecting the eye to the optic nerve (2, 3). Like most neurons, RGC axons are heavily myelinated; myelin is a fatty sheath that speeds electrical signaling. However, the portions of RGC axons that exit the eye must remain unmyelinated to accommodate the eye's dynamic motions, and these ONH RGCs are the most vulnerable to variations in IOP. When the eye's IOP increases for a prolonged period of time, the ONH and its corresponding cells are pulled. This increase in tension causes cellular strain and, as a consequence, glaucoma, an irreversible blindness commonly attributed to a prolonged increase in IOP (3). Patients first notice blind spots in the periphery; the blind spots then arc across their vision until they encompass the entire visual field. Currently, there is no cure, and the main treatments aim to decrease IOP, but regulating IOP serves to prolong vision rather than prevent glaucoma.

Interestingly, only 25-50% of all patients with glaucoma have high IOP, and patients with high IOP do not always develop glaucoma (2, 3). Increased IOP may therefore be correlated with glaucoma rather than a cause of it, so it is extremely important to assess overall RGC health through other measures. Specific research focuses on how debris, including fats, organelles, and degraded proteins, is moved throughout the optic nerve.

 

Astrocytes and Retinal Ganglion Cells

Astrocytes are a cell type within the glial system that interact with neurons to provide metabolic support, assist in signaling, and maintain cellular homeostasis. Throughout the entire neuronal network (brain, spine, optic nerve, etc.), neurons do not exist in isolation. Within the brain, astrocytes are responsible for sending local and wide-ranging signals that assist in neuronal communication (4). Within the optic nerve, astrocytes surround every part of the RGC that is not covered in myelin and assist in similar activities. In fact, astrocytes outnumber RGCs within the optic nerve.

Recall that neurons function by sending neurotransmitters, most commonly glutamic acid, across synapses to relay information from cell to cell. This process is no different for RGCs. Each signal must remain short and sharp to ensure proper communication. One function of astrocytes is to take up lingering glutamic acid after a signal is released, thereby preventing misfiring. The sequestered glutamic acid is converted to glutamine within the astrocyte and sent back to the RGC for future signaling (4). This is only one of the many ways astrocytes support RGCs.

 

Transmitophagy and Implications with Glaucoma

Mitochondria, colloquially known as the powerhouse of the cell, are an indicator of overall cell health: abnormal or reduced numbers of mitochondria are symptomatic of degenerative diseases, including Parkinson's disease and glaucoma. Mitochondria normally provide the cell with ample ATP through the electron transport chain, a process that also generates reactive oxygen species (ROS) (5). As mitochondria age, ROS begin to react with their intracellular proteins, including those of the electron transport chain. When the damage accumulates, a mitochondrion undergoes fission (pinching off a damaged piece to be degraded by the lysosome) or fusion (fusing with a larger, healthy mitochondrion) (5). For the purposes of this article we will only consider lysosome-mediated degradation of mitochondria and mitochondrial fragments, a process known as mitophagy, which is a subset of autophagy. One assumption made about autophagy (literally, "self-eating") is that cells degrade their own organelles. While this may be true for the majority of degradative processes, there is evidence that RGCs extrude mitochondria to be degraded by neighboring astrocytes, a novel process termed transmitophagy.

In 2014, Davis and colleagues provided a new model of mitochondrial degradation within ONH RGCs (6). The study used Xenopus laevis as its model and employed a series of mitochondrial, lysosomal, and astrocytic tags to track mitochondrial movement and degradation within the optic nerve. Ultimately, the authors found fragmented and malformed mitochondria protruding from the RGC axon and being picked up by neighboring astrocytes for degradation (6). Axonal mitochondria are particularly prone to this route because moving an ROS-producing mitochondrion back to the cell soma, where the majority of lysosomes reside, is energetically costly and potentially damaging.

 

Conclusion

The eye is a beautiful yet intricate structure whose function depends on overall cell and organelle health, and mitochondria are perhaps the most important organelles for cellular metabolism and health. While it was previously thought that mitochondria are degraded only within their own cell, transmitophagy illustrates an alternative route by which axonal mitochondria can be degraded. However, transmitophagy has so far been demonstrated only in non-disease models. Future studies should therefore use disease models (e.g., induced glaucoma models) to understand how transmitophagy is affected by, or contributes to, eye disease.

 

Citations

  1. Kolb H. “Gross Anatomy of the Eye.” In: The Organization of the Retina and Visual System. Salt Lake City (UT): University of Utah Health Sciences Center; 1995 [updated 2005 May 1].
  2. Sung, Ching-Hwa, and Jen-Zen Chuang. “The cell biology of vision.” The Journal of cell biology vol. 190,6 (2010): 953-63. doi:10.1083/jcb.201006020
  3. Weinreb, Robert N et al. “The pathophysiology and treatment of glaucoma: a review.” JAMA vol. 311,18 (2014): 1901-11. doi:10.1001/jama.2014.3192
  4. Sofroniew, Michael V, and Harry V Vinters. “Astrocytes: biology and pathology.” Acta neuropathologica vol. 119,1 (2010): 7-35. doi:10.1007/s00401-009-0619-8
  5. Pickrell, Alicia M, and Richard J Youle. “The roles of PINK1, parkin, and mitochondrial fidelity in Parkinson’s disease.” Neuron vol. 85,2 (2015): 257-73. doi:10.1016/j.neuron.2014.12.007
  6. Davis, Chung-ha O et al. “Transcellular degradation of axonal mitochondria.” Proceedings of the National Academy of Sciences of the United States of America vol. 111,26 (2014): 9633-8. doi:10.1073/pnas.1404651111

The Roots of Chemistry: How the Ancient Tradition of Alchemy Influenced Modern Scientific Thought

By Reshma Kolala, Biochemistry & Molecular Biology ‘22

Author’s Note: A scientific education today often omits the origins of modern scientific thought. I was interested in understanding how early philosophers built the foundation of modern scientific disciplines such as chemistry and physics through the ancient tradition of alchemy alongside rational thought and reasoning. 

 

The ancestral equivalents of many modern branches of science have shaped the face of scientific innovation. Alchemy, the predecessor of modern chemistry, has influenced the discovery of several scientific concepts and experimental methodologies that have constructed the foundational basis of empirical science. 

Alchemy had roots in philosophy, astronomy, and religion. It spanned beyond empirical science, combining spirituality with experimental observation to decipher the intricacies of nature. Alchemists were preoccupied with the creation of new materials, such as the transmutation of base metals into precious ones like gold [1]. Alchemists also strove to uncover or create a universal elixir, “[a] substance that would indefinitely prolong life” [2]. The element of spirituality, specifically the belief in ultimate divine perfection, sustained these ideals. Because nature was believed always to strive toward perfection, the transmutation of, say, lead into gold was considered simply a matter of chemical catalysis, which required an understanding of the composition and complexities of the natural world. In the process, alchemists contributed to an incredible diversity of what would later be considered major chemical industries, including metallurgy, the production of paints, inks, and dyes, and cosmetics [3].

Alchemy can be traced back to ancient Egypt. Centuries later, Jabir Ibn Hayyan, a court alchemist and physician in the early Islamic world, was the first to introduce experimental methodology into alchemy, and he is credited with the invention of several chemical processes used in modern chemistry. These include “crystallization, calcinations, sublimation and evaporation, the synthesis of acids (hydrochloric, nitric, citric, acetic and tartaric acids), and distillation” [4]. Hayyan applied this knowledge to improve manufacturing processes that enabled advances in industries that remain important today, including glass-making, the development of steel, the dyeing of cloth, and the prevention of rust. Hayyan's contribution to alchemy paralleled the previously developed Aristotelian theory of elements, which posited four core elements: earth, water, air, and fire. Hayyan suggested the existence of different categories of matter, including spirits (which vaporize upon heating), metals, and stones (which can be converted into powder). Jabir's work laid the foundation for the structured classification of chemical substances, and his practice and encouragement of systematic experimentation began to transform alchemy from a superstitious practice into a proper scientific discipline.

Compared with European alchemy, Chinese alchemy had a more obvious application to medicine and was influenced by traditional Chinese medicine and by Taoism, a philosophical and religious tradition of living in harmony with the natural order of the universe. Practices such as acupuncture, Tai Chi, and meditation focus on purification of the spirit in hopes of achieving immortality, a core value in alchemy [5]. In an attempt to uncover an elixir of eternal life, Chinese alchemists accidentally invented gunpowder, which would go on to have major social and political consequences [6].

 

The Decline of Alchemy and Rise of Modern Chemistry

Alchemy regained popularity in Renaissance Europe and influenced many early modern scientists, including Isaac Newton and Robert Boyle, both of whom were also alchemists. Considered the father of modern chemistry, Robert Boyle is best known for Boyle's law, which describes the inverse relationship between the volume of a gas and its pressure. Boyle, however, was far from a scientist in the modern sense and was considered a natural philosopher. He was interested in transmutation and constructed the “corpuscularian hypothesis,” in which he described all matter as consisting of varied arrangements of identical “corpuscles,” known today as particles [7]. According to this theory, transmutation was just a matter of rearrangement. Boyle wrote The Sceptical Chymist to assert his hypothesis, officially establishing chemistry as the science of the composition of substances and marking the separation of modern chemistry from the mystical qualities of alchemy. Over the span of several millennia, alchemists “were learning fundamental principles of chemistry: breaking down ores, dissolving metals with acids, and precipitating metals out of solution” [8]. This laid the foundation for basic scientific experimentation, with later alchemists such as Boyle emphasizing the importance of consistent and accurate results, pioneering the development of chemical analysis and the scientific method. Boyle also rejected the Aristotelian theory of elements and recognized that certain substances decompose into other substances. This brought forth the first conception of a chemical element as a substance that cannot be further decomposed [9]. Despite denouncing mysticism, Boyle remained an alchemist and believed, correctly, that one element could be transmuted into another through rearrangement of its basic particles. This was finally achieved by Ernest Rutherford in 1919, when he transformed nitrogen into oxygen by aiming alpha particles at nitrogen atoms, producing oxygen atoms and hydrogen nuclei in the first man-made nuclear reaction [10]. Rutherford is considered a father of nuclear physics, illustrating the multidisciplinary influence of alchemy on many modern sciences.

Alchemical practice also had implications for medicine. Philippus Paracelsus, a prominent Swiss physician, applied general alchemical principles to a more concrete system: the human body. Similar to the idea of transmutation, he believed that organs could be transformed from sick to healthy, implying that chemicals could be used to treat illness. Paracelsus pioneered the integration of chemistry and bodily medicine in what would later develop into toxicology [11]. This launched an entirely new branch of science in which inorganic materials were used in conjunction with the human body, including the use of mercury to treat syphilis [12]. Paracelsus is also known for his creation of laudanum, a tincture of opium [13]. The most active substance in opium is morphine, a powerful painkiller that is also used for anesthetic purposes.

The rise of modern chemistry does not mark the dissolution of alchemy but rather a departure from the occultism of the ancient tradition toward a more empirical method of scientific discovery. Although alchemy is considered an ancient science, it can be regarded as a necessary precursor to the development of modern chemistry, and it continues to have implications for scientific discovery today.

 

References

  1. King, P. (2007). Routledge encyclopedia of philosophy online: all site license & consortia/ .. Place of publication not identified: Routledge.
  2. The Editors of Encyclopaedia Britannica. (2007, December 13). Elixir. Retrieved from https://www.britannica.com/topic/elixir-alchemy
  3. Zimdahl, R. L. (2015). Six Chemicals That Changed Agriculture. Academic Press.
  4. Amr, S. S., & Tbakhi, A. (2007). Jabir ibn Hayyan. Retrieved from  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6077026/
  5. An Introduction to Taoist Alchemy. (n.d.). Retrieved from https://www.goldenelixir.com/jindan/jindan_intro.html
  6. Szczepanski, K. (2019, July 3). How China Invented Gunpowder. Retrieved from https://www.thoughtco.com/invention-of-gunpowder-195160
  7. Corpuscularian hypothesis. (n.d.). Retrieved from https://www.britannica.com/science/corpuscularian-hypothesis
  8. Principe, L. (2007). Chymists and chymistry: studies in the history of alchemy and early modern chemistry. Sagamore Beach, MA: Science History Publications/USA, a division of Watson Publishing International.
  9. Robert Boyle. (n.d.). Famous Scientists. Retrieved from https://www.famousscientists.org/robert-boyle/
  10. Rutherford, transmutation and the proton. (2019, June 26). Retrieved from https://cerncourier.com/a/rutherford-transmutation-and-the-proton/
  11. F., J. (2000, January 1). Paracelsus: Herald of Modern Toxicology. Retrieved from https://academic.oup.com/toxsci/article/53/1/2/1673334
  12. Stillman, J. M. (1920). Theophrastus Bombastus von Hohenheim called Paracelsus: his personality and influence as physician, chemist and reformer. Chicago: The Open court Publishing Co.
  13. Sigerist, H. E. (1941). Laudanum in the works of Paracelsus.

Taking the Driver’s Seat in your Diagnosis

By Mari Hoffman, Genetics and Genomics, ‘21

Author’s Note: In this paper, I will be discussing reviews on patient activation level and health outcomes in chronic diseases. I wanted to analyze the effect patients can have on their own treatment plans and discuss how they can make a difference. I feel personally connected to this topic because my dad was diagnosed with chronic lymphocytic leukemia and has been a role model for me throughout his treatment journey.

 

Patient involvement and education in one's own diagnosis are not novel ideas, but they have been shown to play a role in the overall patient experience [1, 2]. Chronic illnesses place responsibilities and demands on patients to manage and understand their care and diagnosis, and patients with chronic disease who take a more active, engaged role in their own health have been shown to have more positive outcomes [3]. Patient engagement, or patient activation, can be defined as "the individual's knowledge, skill, and confidence for managing their own health" [3].

Patients who are more involved in their diagnosis and treatment have been shown to have more positive outcomes. One study assessed how patient activation level affected the actions patients took, their communication with their doctor, and their overall satisfaction with their care and treatment [3]. The survey data were collected by CancerCare, which sent six online surveys to a sample of people aged 25 or older who had received a cancer diagnosis, in order to measure patient activation [3]. The study population varied in its characteristics; after controlling for demographics and health status, the study found that patients with higher activation scores were 4.7 times more likely to start exercising and 3.3 times more likely to adopt a healthier diet than patients with lower activation scores [3]. Less activated patients also scored lower on following their doctor's recommendations, discussing side effects with their doctor, and overall satisfaction with the care they received [3]. In short, patients who score higher on activation are more likely to be well informed about their treatment options and more proactive in managing their condition [3].
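
Figures such as "4.7 times more likely" are adjusted odds ratios. The sketch below shows, on synthetic data, how such ratios are typically estimated with logistic regression while controlling for a covariate; the variable names and effect sizes are hypothetical and do not reproduce the study's actual analysis.

```python
# Illustrative odds-ratio estimation on synthetic survey data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

df = pd.DataFrame({
    "high_activation": rng.integers(0, 2, n),   # 1 = higher activation score
    "age": rng.normal(60, 10, n),               # demographic covariate
})

# Simulate whether each patient started exercising, with activation mattering most.
logit_p = -1.0 + 1.5 * df["high_activation"] - 0.01 * (df["age"] - 60)
df["started_exercise"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression adjusting for age; exponentiated coefficients are odds ratios.
fit = smf.logit("started_exercise ~ high_activation + age", data=df).fit(disp=False)
print(np.exp(fit.params))  # odds ratio for high_activation ~ exp(1.5), i.e. ~4.5
```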

A wide range of factors has been found to affect a patient's interest in participating in health care decision-making, including demographics, personal characteristics, the time a patient can devote, the stage and severity of disease, and the influence of the practitioner [4]. There are, of course, many other reasons why someone may or may not involve themselves in their own treatment decisions and plans.

Information about one's diagnosis and treatment is one of the primary ways a patient can educate themselves. This information enables people to understand consequential decisions and discuss them with their healthcare provider [4]. Education about the disease has been shown to increase a patient's willingness to ask questions, because patients who are more confident in their understanding take a more active role in their treatment plan [4]. Closing the education gap also helps patients better understand their personal treatment needs, including personalized recommendations for exercise and diet, and to trust their doctor enough to follow those recommendations [4].

Patient education and proactiveness can even lead to a patient becoming a driver of their own diagnosis. In January of 2016, my dad was diagnosed with chronic lymphocytic leukemia by a general hematologist and was put on a "watch and wait" approach, which essentially means that a patient's condition is monitored without treatment until there is a change in symptoms [5]. This made sense for my dad since his cancer was slow-moving and he had few symptoms. Around six months later, the hematologist said he needed to be treated with fludarabine, cyclophosphamide, and rituximab (FCR), a cytotoxic chemotherapy. The doctor gave no real explanation for why he had to be treated at that time; regarding FCR, he simply said "it's the gold standard." By this time, my dad had started to connect with CLL support groups such as the CLL Society and decided to get a second opinion. Through the process of educating himself and receiving that second opinion, he realized there were many negative side effects of FCR that his doctor had not informed him about, and that he would need genetic testing to see whether he was even compatible with the treatment. When he brought up his genetics to the doctor, the doctor responded that he thought they had already done that. My dad left the appointment shocked, realizing he needed to be informed and educated about his treatment options if he wanted the best possible care. Genetic testing revealed that over 50% of his cells carried a 17p deletion and that he also had trisomy 12, meaning his genetics were not compatible with the proposed treatment. It was becoming apparent to my dad that newer, non-cytotoxic treatments were superior for most CLL patients, so he took the initiative to continue to "watch and wait" and explore other options.

After a couple of months, his symptoms started to progress, and he got an appointment with a doctor at the University of California, San Diego (UCSD), who told him about a clinical trial that could be a potential treatment option for his disease. Through his own research and the resources he found through the CLL Society website, support groups, and UCSD, he decided this was the right treatment for him. It was good that he chose to wait rather than take the initial treatment offered; had he chosen the latter, he would not have qualified for the trial. He has now been on the clinical trial with venetoclax and ibrutinib for about two years and has shown normal white blood cell counts, which are used to track the presence of CLL. Through the resources provided to him, he gained the knowledge and connections with experts in the field to feel confident in his decision to find a treatment plan that worked for him. My dad is now very involved with the CLL Society and founded a local CLL support group in San Diego, where members meet to discuss their experiences and bring in health professionals to lead discussions.

My dad's personal story and the data discussed above show how imperative it is to research and educate yourself about your own condition. It is important to get your main information and opinions from your doctor, and to always consider a second opinion. Educating yourself about your own health and treatment plan can clearly be beneficial, but it is critical to remember that doctors and health care professionals are trained in their field. Use your education and resources to find a specialist in your disease and start a conversation with them. Although you may not have all the resources in the beginning, the best advocate for your health and future is yourself. Use every resource you can to stay informed and in touch with the professionals who study your disease.

 

References: 

  1. Thompson, Andrew G.h. “The Meaning of Patient Involvement and Participation in Health Care Consultations: A Taxonomy.” Social Science & Medicine, vol. 64, no. 6, 2007, pp. 1297–1310., doi:10.1016/j.socscimed.2006.11.002.
  2. Hibbard, Judith H., and Jessica Greene. “What The Evidence Shows About Patient Activation: Better Health Outcomes And Care Experiences; Fewer Data On Costs.” Health Affairs, vol. 32, no. 2, 2013, pp. 207–214., doi:10.1377/hlthaff.2012.1061.
  3. Hibbard, Judith H., et al. “Does Patient Activation Level Affect the Cancer Patient Journey?” Patient Education and Counseling, vol. 100, no. 7, 2017, pp. 1276–1279., doi:10.1016/j.pec.2017.03.019.
  4. Vahdat, Shaghayegh et al. “Patient involvement in health care decision making: a review.” Iranian Red Crescent medical journal vol. 16,1 (2014): e12454. doi:10.5812/ircmj.12454

Applications of Machine Learning in Precision Medicine

By Aditi Goyal, Statistics, Genetics and Genomics, ‘22

Author’s Note: I wrote about this topic after being introduced to the idea through a speaker series. I think the intersection of modern-day computer science, genetics, and statistics creates a fascinating crossroads between these academic fields, and the applications are simply astounding.

 

Next-generation sequencing (NGS) has revolutionized the field of clinical genomics and diagnostic genetic testing. Now that sequencing technologies can be easily accessed and results obtained relatively quickly, many scientists and companies are relying on this technology to learn more about genetic variation. There is just one problem: magnitude. NGS and other genome sequencing methods generate data sets containing billions of data points. As a result, the simple pairwise comparisons of genetic data that served scientists well in the past cannot be applied in a meaningful way to these data sets [1]. Consequently, in an effort to make sense of these data, artificial intelligence (AI), encompassing approaches such as machine learning and deep learning, has entered the biological sciences. Using AI and its adaptive nature, scientists can design algorithms that identify meaningful patterns within genomes and highlight key variations. Ideally, with a large enough training data set and a powerful enough computer, AI will be able to pick out significant genetic variations, such as markers for different types of cancer or multi-gene mutations that contribute to complex diseases like diabetes, and provide geneticists with the information they need to eradicate these diseases before they manifest in the patient.

The formal definition of AI is simply "the capability of a machine to imitate intelligent human behavior" [2]. But what exactly does that imply? The key feature of AI is that it can make decisions, much like a human would, based on previous knowledge and the results of past decisions. AI algorithms are designed to take in information, derive patterns from it, and apply those patterns to new data about which we know very little. Using these adaptive strategies, AI is able to "learn as it goes," fine-tuning its decision-making process with every new piece of data provided to it, eventually making it the ultimate decision-making tool. While this may sound highly futuristic, AI has been part of daily life for years, from the self-driving cars being tested in Silicon Valley to the voice recognition programs available on every smartphone today. Most chess fans will remember the iconic "Deep Blue vs. Kasparov" match, in which an IBM supercomputer running a basic AI algorithm, built by a team whose core members began the project as Carnegie Mellon graduate students, competed against the reigning world chess champion [3]. Back in 1997, this algorithm was revolutionary, as it was one of the first major signs that AI could rival human performance [4]. There is no question that AI has immense potential in the field of genomics.

Before we can understand what AI can do, it is important to understand how it works. Generally speaking, AI algorithms are developed in two ways: supervised and unsupervised learning. The key difference is that in supervised learning, the data sets we provide to the algorithm to "learn" from are data sets that we have already analyzed and understand; in other words, we already know what the output should be before providing the data [5]. The goal, therefore, is for the algorithm to generate an output as close to our prior knowledge as possible. Eventually, by training on larger and more complex data sets, the algorithm modifies itself to the point where it does the job of the data scientist, but on a much larger scale. Unsupervised learning, on the other hand, has no predefined output, so in a sense the user is learning along with the algorithm. This technique is useful when we want to find patterns or define clusters within a data set without specifying in advance what those patterns or clusters will be. For genomic studies, scientists typically use unsupervised learning to analyze their data sets, since the gigantic data sets produced by omics studies are difficult to fully characterize in advance.
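
As a concrete illustration of the distinction, the sketch below trains a supervised classifier on labeled toy "expression" data and, separately, lets an unsupervised clustering algorithm find groups without labels. The data are simulated, and the choice of algorithms (random forest, k-means) is only one of many possibilities.

```python
# Supervised vs. unsupervised learning on a simulated omics-style data set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 samples x 50 "genes"; two hidden groups differ in the first five genes.
X = rng.normal(size=(200, 50))
labels = rng.integers(0, 2, 200)
X[labels == 1, :5] += 2.0

# Supervised: labels are known up front; the model learns to reproduce them.
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels are given; k-means finds clusters on its own,
# and we only compare them with the hidden labels afterwards.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = max(np.mean(clusters == labels), np.mean(clusters != labels))
print("unsupervised cluster agreement:", round(float(agreement), 2))
```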

Some of the clearest applications of AI in biology are in cancer biology, especially in diagnosis [6]. "AI has outperformed expert pathologists and dermatologists in diagnosing metastatic breast cancer, melanoma, and several eye diseases. AI also contributes to innovations in liquid biopsies and pharmacogenomics, which will revolutionize cancer screening and monitoring, and improve the prediction of adverse events and patient outcomes" [7]. By providing a data set of genomic or transcriptomic information, we can develop an AI program designed to identify key variations within the data. The problem lies, primarily, in providing that initial data set.

In the 21st century, an era of data hacks and privacy breaches, the general public is not keen to release private information, especially when that information contains a person's entire medical history. Because of this, "Research has suffered for lack of data scale, scope, and depth, including insufficient ethnic and gender diversity, datasets that lack environment and lifestyle data, and snapshots-in-time versus longitudinal data. Artificial intelligence is starved for data that reflects population diversity and real-world information" [8]. The ultimate goal of using AI is to identify markers and genetic patterns that can be used to treat or diagnose genetic disease, but until we have data that accurately represent patients, this cannot be achieved. A study in 2016 showed that 80% of participants in genome-wide association studies (GWAS) were of European descent [9]. At first glance, the impact of this may not be obvious, but it becomes clearer when a disease such as sickle cell anemia is considered. In sickle cell anemia, red blood cells are not disk-shaped, as they are in most individuals, but sickle-shaped; these rigid, misshapen cells break down prematurely and can block blood flow, reducing the oxygen delivered around the body. This condition disproportionately affects people of African descent, so it is not reasonable to expect to find a genetic marker or cure for this disease when the data set does not accurately reflect this population.

Another key issue is privacy law. Any genomic data released to a federal agency such as the NIH for research purposes is de-identified, meaning the patient is made anonymous; however, studies have shown that people can be re-identified using their genomic data, the remaining identifiers still attached to their genome, and the availability of genealogical data and public records [10]. Additionally, once your data are obtained, policies like the Genetic Information Nondiscrimination Act protect you in some ways, but these pieces of legislation are not all-encompassing and still leave the window open for some forms of genetic discrimination, such as in school admissions. The agencies conducting research have the infrastructure to store and protect patient data, but in an era of data leaks and security breaches, there are serious concerns that need to be addressed.

Ultimately, AI in genomics could be transformative. Modern biology, defined by the innovation of NGS technologies, has redefined what is possible. Every day, scientists around the world generate data sets larger than ever before, making a system to understand them all the more necessary. AI could be that solution, but before any scientific revolution happens, the legislation protecting citizens and their private medical information must be updated to reflect the technology of the times. Our next challenge as a society in the 21st century is not developing the cure for cancer or discovering new secrets about the history of human evolution; it is developing a system that will support and protect all people involved in this groundbreaking journey for decades to come.

 

References

  1. https://www.nature.com/articles/s41576-019-0122-6
  2. https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
  3. https://en.chessbase.com/post/kasparov-on-the-future-of-artificial-intelligence
  4. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.278.5274&rep=rep1&type=pdf#page=41
  5. https://www.nature.com/articles/s41746-019-0191-0
  6. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6373233/
  7. https://www.genengnews.com/insights/looking-ahead-to-2030/
  8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5089703/
  9. https://www.genome.gov/about-genomics/policy-issues/Privacy

Frontiers in Animal Behavior Research: Scientific Application of Krogh’s Principle

By Kaiming Tan, Neurobiology, Physiology, and Behavior, ‘16

Author’s Note: As a student who is engaged in biological sciences research, I often read research publications and perform experiments in laboratory classes and research projects. A common theme across these studies is that different labs use various model organisms. For instance, labs that research infectious human diseases tend to use primates because of their similarity to humans, whereas geneticists tend to use fruit flies because of their clearly defined and inherited traits. I often wondered what drives the selection of specific model organisms and whether there is any scientific justification behind it. This manuscript introduces Krogh’s principle, a principle which is commonly applied in in vivo research studies to aid in determining the appropriate model organism. Additionally, a brief research proposal is presented to demonstrate Krogh’s principle on a practical level.

 

Key Words

Krogh’s principle, animal behavior, sensory ecology, Leach’s storm-petrel, bladder grasshopper, research proposal

 

Introduction

Krogh’s principle is the gold standard for choosing experimental models in the fields of animal behavior and sensory ecology. The principle states that for every research question there is a preferred model organism whose study allows researchers to produce experimental results that best answer that question. These preferred organisms often have one or more specialized traits that are particularly well suited to a researcher’s objectives [1]. August Krogh articulated this principle in the early twentieth century, and ever since, scientists worldwide have applied it when selecting model organisms for their research.

 

Krogh’s Principle in Use

Dr. Robyn Hudson is a world-renowned scientist and a leading expert on the effects of chemical olfactory cues on animal behavior. Dr. Hudson applied Krogh’s principle in her research on olfactory learning and development by choosing the European rabbit as her model. Rabbit pups are born blind, but they have a fully developed sense of smell, or olfaction. The pups are fed in the dark conditions of their underground nests, and they are altricial, meaning they are born underdeveloped and entirely dependent on their mother’s brief nursing visits. In Hudson’s experiment, the rabbit pups were visited by their mother only once a day, to be nursed for about three to four minutes [2]. The pups must therefore rely on olfaction to locate their mother’s nipples and obtain the milk they need to survive. Given the natural history of this species, scientists can conclude that the pups’ ability to find their mother’s nipples is primarily due to olfaction.

Krogh’s principle also applies to Dr. Hudson’s choice of research subject because rabbits are easy to rear and observe in a laboratory setting. European rabbit pups have evolved plastic mechanisms that are calibrated by odor experience in earlier and current environments. This olfactory plasticity allows the rabbits to modify their behavior in response to olfactory cues such as different scents, enabling them to learn odor-evoked behaviors that are easy for researchers to measure and manipulate [2,3]. As a result, scientists can measure the rabbits’ behavioral ecology with respect to foraging, which makes the European rabbit a preferred model for olfactory learning. These rabbits are well suited to such work because they are born with an innate reliance on olfaction, giving Dr. Hudson and her research team an excellent opportunity to study olfactory learning and development.

Another illustrative example of Krogh’s principle in use comes from the work of Dr. Brian Hoover, whose research interests include the role of olfaction in determining mating preferences in avian species. Dr. Hoover investigated mating patterns in Leach’s storm-petrel (Oceanodroma leucorhoa) to explore the chemical basis of mate choice through avian olfaction. There are several reasons he chose the Leach’s storm-petrel as the model organism for this study. The Leach’s storm-petrel has among the largest olfactory bulbs of any bird and thus an excellent sense of smell [4], and olfaction is critical for this species to locate prey [5]. In addition, Leach’s storm-petrels are genetically monogamous and produce only one chick per year, so the genetic quality of that single offspring matters greatly: its genotype must be well suited to its environment for it to survive. Scientists can therefore collect data on offspring quality while observing the adults’ mating patterns, simplifying data collection while still capturing mating preferences accurately.

The Leach’s storm-petrel population at the study site was also abundant, providing a large and accessible sample size. Sample size is an important consideration for data analysis because it is a key determinant of statistical power, the ability to report findings with statistical confidence. Other species could have been used in this study, including the mallard (Anas platyrhynchos), which also has large olfactory bulbs and whose reproductive behavior is likewise driven by olfactory cues [6]. However, the mallard is not an ideal model organism for this study because it is polygamous and therefore would not show mating preferences as clearly as the monogamous Leach’s storm-petrel [5,7]. Both Dr. Hudson and Dr. Hoover utilized Krogh’s principle when designing their respective studies. Applying Krogh’s principle allows for the intersection of practical study methodology, high-quality data, and conclusions that are generalizable beyond the species studied.
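
To show why an abundant population matters, the sketch below runs a standard power calculation for a two-sample comparison. The effect size and significance level are illustrative assumptions, not values from Dr. Hoover's study.

```python
# Statistical power vs. sample size for a two-sample t-test (illustrative values only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Birds needed per group to detect a medium effect (Cohen's d = 0.5) with 80% power.
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required sample size per group: {n_needed:.0f}")   # ~64

# Power actually achieved if only 20 birds per group were available.
power_small = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"power with 20 per group: {power_small:.2f}")        # roughly 0.3
```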

 

Application of Krogh’s Principle: An Experimental Proposal on the Effect of Noise Pollution on Insect Communication    

Having examined historical uses of Krogh’s principle, both recent and distant, we can now apply it to future research. Given how useful Krogh’s principle is, it makes sense to propose an additional study, this time on auditory interference in grasshoppers. This research proposal will explore whether artificial noise in the environment affects insect hearing and communication; the hypothesis is that insect perception of sound is masked by environmental noise pollution. Based on Krogh’s principle, Bullacris membracioides (the bladder grasshopper) will be the model organism for this study because of its anatomical and behavioral advantages. Male and female bladder grasshoppers call to each other during mating in a duet: the male produces a song that is then answered by a receptive female, so the female’s reply provides a direct measure of whether she perceived the male’s call [8]. Anatomically, female bladder grasshoppers possess a sensitive auditory system of six pairs of ears (A1-A6). The A1 auditory organ contains 2,000 sensilla, which allows the females to hear sounds over distances of up to two kilometers. This is in contrast to other grasshopper species (e.g., Achurum carinatum) in which females can hear male calls only within one to two meters [9].

Female bladder grasshopper calls range in frequency from 1.5-3.2 kHz [8]. A common habitat for the bladder grasshopper is the roadside, and road noise from motorcycles can be loud (around 110 dB) and falls within a frequency range (700 Hz to 1.5 kHz) that could interfere with the grasshoppers’ auditory system. Bladder grasshoppers typically mate during the daytime, which coincides with peak traffic noise, so common noise pollution may disrupt perception of grasshopper calls and interfere with mating behavior [10]. In addition, grasshoppers have been shown to exhibit phonotaxis (movement in response to sound) under laboratory conditions [8-11]. To test whether artificial noise can disrupt the duetting behavior, a female grasshopper will be placed in a glass aquarium in front of an omnidirectional speaker, mounted on a parabolic dish, that plays a recording of a male’s song. Testing will be done at distances of 100, 200, 500, 1,000, and 2,000 meters to examine whether distance correlates with the time the female takes to respond, and the experiment will be repeated with the male call overlaid with traffic noise, with traffic noise alone, and with no sound at all. Studying the effects of noise pollution on the auditory system using the bladder grasshopper is an example of Krogh’s principle because these insects are easy to raise and rear in a laboratory setting, and their exceptional hearing makes the relevant behavior practical to measure and manipulate across different noise levels.
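
A quick way to anticipate how the playback levels fall off across these test distances is to assume simple spherical spreading, roughly a 6 dB drop per doubling of distance. The source level and background level in the sketch below are placeholder assumptions, not measured values.

```python
# Free-field attenuation of the playback call with distance (spherical spreading).
# Source and background levels are placeholder assumptions for illustration only.
import math

SOURCE_LEVEL_DB = 98.0      # assumed call level at 1 m from the speaker
BACKGROUND_LEVEL_DB = 45.0  # assumed ambient level at a quiet site

def received_level(distance_m, source_db=SOURCE_LEVEL_DB, ref_m=1.0):
    """Received level in dB at distance_m, ignoring absorption and ground effects."""
    return source_db - 20 * math.log10(distance_m / ref_m)

for d in [100, 200, 500, 1000, 2000]:
    level = received_level(d)
    audible = level > BACKGROUND_LEVEL_DB
    print(f"{d:>5} m: {level:5.1f} dB ({'above' if audible else 'below'} background)")

# Raising BACKGROUND_LEVEL_DB toward roadside traffic levels moves the point where
# the call drops below the background much closer to the speaker, which is exactly
# the masking effect the proposal sets out to test.
```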

 

Conclusion

Krogh’s principle is an important concept to keep in mind while designing research studies. Many scientists around the world, including Dr. Hudson and Dr. Hoover, have applied this principle in their research to better answer their research questions. To further illustrate its utility, an experimental proposal was presented concerning the effects of noise pollution on insect communication, along with the anatomical and behavioral characteristics that make the bladder grasshopper the appropriate model for that experiment. Krogh’s principle provides useful guidance for scientists seeking the most representative and practical model organism to study.

 

Acknowledgments

The author would like to thank Dr. Gabrielle Nevitt (Professor of the Department of Neurobiology, Physiology and Behavior at University of California, Davis) for supporting this research project and providing feedback on early versions of this manuscript.

 

Editor’s Note: A previous version of this article was published on April 18, 2020. The article was updated on June 19, 2020 to correct citation style.

 

References

  1. Lindstedt, S. (2014). Krogh 1929 or ‘the Krogh principle.’ The Journal of Experimental Biology, 217 (Pt 10), 1640-1.
  2. Kindermann, U., Gervais, R., & Hudson, R. (1991). Rapid odor conditioning in newborn rabbits: Amnesic effect of hypothermia. Physiology & Behavior, 50(2), 457–460. 
  3. Schaal, B., Coureaud, G., Doucet, S., Allam, M. D.-E., Moncomble, A.-S., Montigny, D., et al. (2009). Mammary olfactory signalisation in females and odor processing in neonates: Ways evolved by rabbits and humans. Behavioural Brain Research, 200(2), 346–358.
  4. Hoover, B., Alcaide, M., Jennings, S., Sin, S., Edwards, S., & Nevitt, G. (2018). Ecology can inform genetics: Disassortative mating contributes to MHC polymorphism in Leach’s storm‐petrels (Oceanodroma leucorhoa). Molecular Ecology, 27(16), 3371-3385.
  5. Nevitt, G., & Haberman, K. (2003). Behavioral attraction of Leach’s storm-petrels (Oceanodroma leucorhoa) to dimethyl sulfide. The Journal of Experimental Biology, 206 (Pt 9), 1497-501.
  6. Corfield, J. R., Price, K., Iwaniuk, A. N., Gutierrez-Ibañez, C., Birkhead, T., & Wylie, D. R. (2015). Diversity in olfactory bulb size in birds reflects allometry, ecology, and phylogeny. Frontiers in Neuroanatomy, 9, 102.
  7. Doherty, P., Nichols, J., Tautin, J., Voelzer, J., Smith, G., Benning, D., et al. (2002). Sources of variation in breeding-ground fidelity of mallards (Anas platyrhynchos). Behavioral Ecology, 13(4), 543-550.
  8. Hedwig, B. (2014). Insect Hearing and Acoustic Communication.
  9. Van Staaden, M., & Römer, H. (1997). Sexual signalling in bladder grasshoppers: Tactical design for maximizing calling range. The Journal of Experimental Biology, 200 (Pt 20), 2597-608.
  10. Chepesiuk, R. (2005). Decibel Hell: The Effects of Living in a Noisy World. Environmental Health Perspectives, 113(1), A34-A41.
  11. Drosopoulos, S., & Claridge, M. (2006). Insect Sounds and Communication: Physiology, Behaviour, Ecology, and Evolution.

 

Cerebral Palsy: More Than a Neurological Condition

By Anjali Borad, Psychology ‘21  

Author’s Note: This paper explores the dynamic relationship between a mother and her son and the complexity of the son’s health condition. I will look at a specific case of cerebral palsy, my brother’s, and discuss how his condition came to be and his likely prognosis. I want to delve into how family dynamics shape the caregiving that comes with having a disabled family member, and how that is reflected in the relationship between my brother and my mother.

 

I see two different perspectives of my brother, Sam, and his condition, cerebral palsy: one through his eyes and the other through the eyes of my mother, his caregiver. Observing how my mother has taken care of Sam from the beginning, I began to realize that it takes a lot to be a caregiver and that she plays a significant role in his life. To gain more insight into her practices of giving care, I interviewed her, starting by asking what it means to be a caregiver and what “care” means to her. She took a deep breath and described her daily routine as a caregiver. “Waking up in the morning, the first thing that you have to do is to attend to him and care for him before yourself,” she said. “You know that from brushing his teeth to giving him a shower and feeding him, we have to do everything from A to Z.”[1] A day in the life of my mother starts and ends with my brother, from getting him out of bed to providing him with basic needs like food and water. She even takes care of specific requests that pertain to his own interests, such as wearing a watch every day and having matching socks and pants.

Cerebral palsy is a neurological disorder. Many cases of cerebral palsy arise from hypoxic conditions during the birthing process; this lack of oxygen to the brain can cause developmental delays and lifelong debilitating impairments [2]. My family and I have experienced the difficulties and limitations that accompany this condition firsthand. My brother’s cerebral palsy takes its most extreme form: he has quadriplegia and spasticity. A telltale sign of quadriplegic cerebral palsy is the inability to voluntarily control and use the extremities. Spasticity occurs due to a lesion of the upper motor neurons, which are located in the brain and spinal cord. Such a lesion interferes with the signals that muscles need in order to move, and it manifests in the body as increased muscle tone and unusually tight muscles [3]. Dr. Neil Lava, a member of the National Multiple Sclerosis Society and the American Academy of Neurology, describes the consequences of prolonged muscle inactivity: “When your muscles don’t move for a long time, they become weak and stiff,” Lava writes [4]. This physical restriction is evident in my brother’s case because he has been wheelchair-bound since the age of seven.

Upper motor neuron lesions can worsen over time. Major prolonged symptoms include over-responsive reflexes, weakness in the extensor muscles, and slow body movements, all of which affect balance and coordination. Occupational and physical therapy can alleviate some of these symptomatic stresses; performing the right kinds of stretches, for example, can help relieve some muscle stiffness. Medication and certain surgical procedures can also treat upper motor neuron symptoms. Common muscle relaxants prescribed to patients include Zanaflex, Klonopin, and baclofen [5].

 

At the neurochemical level, “Spasticity results from an inadequate release of gamma-aminobutyric acid (GABA), an inhibitory neurotransmitter in the central nervous system,” according to Mohammed Jan [6]. In a typical neuron, when GABA binds to GABA receptors on the postsynaptic cell, it decreases the likelihood that the postsynaptic neuron will fire an action potential because of the inhibitory nature of GABA receptors. In a condition like cerebral palsy, in which some form of insult or damage has occurred to the brain, especially to the upper motor neurons, the result is hyperactive reflexes rather than this normal inhibition. At this point, muscle spasticity becomes detectable [6].

Apart from the biological mechanism underlying this condition, significant environmental factors around the time of delivery also contributed to Sam’s condition. For eight months, my mother was carrying a normally developing fetus. However, a few weeks before Sam’s delivery date, my mother could not feel any fetal movement for about three days, so she went in for an examination. During the examination, Sam unexpectedly made a fluttering movement. If he had not moved, my mother would have needed a Caesarean section immediately. Because he had moved, the examiners found nothing wrong with the fetus and deemed it safe to send my mother home.

Just a few days later, my mom was rushed to the hospital when her labor pains started. At the time, the primary medical staff consisted mostly of medical interns still honing their decision-making skills in critical situations. From the time of my mother’s arrival at the hospital, 13 hours passed before a team of doctors and interns determined that she needed a C-section. Once this decision was made, the medical staff needed additional time to prepare the room for surgery. All the while, my mom was in labor and, unbeknownst to her, the placenta had detached and the umbilical cord had been damaged. By the time the surgery was finally completed, Sam had suffered asphyxiation. According to the Cerebral Palsy Guide, “asphyxiation that occurs during labor or delivery may have been caused by medical malpractice or neglect. Early detachment of the placenta, a ruptured uterus during birth or the umbilical cord getting pinched in a way that restricts blood flow can cause oxygen deprivation.” [7]

Once a disease is established and diagnosed, the most important aspect is treatment and therapy. Questions about how to manage pain, how to make daily routines easier to perform, and how to support family members raising a child with a disability all go into the treatment planning process. These individuals need more than mere pills to get through their daily lives. This is where therapists (occupational, behavioral, speech, physical, and vocational) and related health advocates, including family members, come into play. While therapists cannot remove the condition, they provide strategies to alleviate psychological symptoms, including feelings of loneliness, fear about who will provide care, and resentment toward oneself. Family psychologists can help children with cerebral palsy by providing an initial assessment to gain more insight into the family dynamics. If there seems to be a lack of parental support or of child attachment, a family psychologist can address this through therapy sessions with the parents. These sessions allow parents to individually discuss what they think is working well for the child and which areas can be improved. The parents are also free to talk about their own personal issues, permitting the psychologist to gain a better understanding of certain triggers for the parents, since these triggers can affect caregiving for the child with special needs.

Cerebral palsy is more than just a neurological condition. It is a way of life that, for Sam, is entangled in a web of personal, social, familial, caregiving, and medical challenges. One noteworthy concept heavily emphasized in the healthcare field today is the family-centered management model. A family-centered approach strives to improve quality of life for the individual with the condition and for the rest of the family together. For a family, it can be quite taxing, physically and emotionally, to take care of someone for the rest of their lives. While it is considerably easier for the care-receiver to reap the benefits of the caregiver, it is more difficult for the caregiver to constantly provide. The family-centered approach tries to find a middle ground in which the caregiver or family and the care-receiver benefit from each other as much as possible. In a holistic family-centered model, the needs of each family member are taken into consideration.

A study by Susanne King details the roles of pediatric neurologists, therapists, and family members, especially parents, in caring for children with cerebral palsy. The study emphasizes the limitations cerebral palsy places on individuals. For example, families with special needs children often have specific ways of communicating, specialized equipment used at home, and a support system consisting of family members, therapists, and guidance counselors. This heavy emphasis on familial involvement, combined with medical guidance from professionals, is the root of family-centered care. King writes that “these children often have complex long-term needs that are best addressed by a family-centered service delivery model.”[8] Oftentimes, families with disabled members struggle. Some parents, for example, experience great distress because they do not completely understand what is happening to their child and thus fail, at times, to acknowledge their limitations. Others feel incapable of looking after their child but cannot bear the idea of sending them away to an institution.

King also discusses the lack of research on the family unit as a whole in the practice of caregiving. “Although there is much evidence supporting a family-centered approach in the area of parental outcomes, there has been little work reported on the family unit as a whole,” King writes. “The most common outcome is better psychological well-being for mothers (because they generally were the participants in most of the studies).”[8]

In my family, I can see family-centered management of my brother’s condition in action. Both of my parents have roles in my brother’s life that collectively enable and mobilize him to feel included and respected. I like to think of my parents, figuratively, as my brother’s arms and legs, and of myself as his eyes. Working together to the best of our ability, we enable him to see the outside world much as we experience it.

All my life, I have seen my mother perform the role of a caregiver. I have seen so many ups and downs in her situation, and I would always ask myself: What makes her get up every morning and continue to give the care she does? What makes her not give up? She told me, “I have faith in God, and I know that He creates pathways for me to deal with the physical implications of taking care of a disabled family member and see, I have never had any major problems with your brother. I will continue to give care for as long as my body will allow for me to do so.”[1] Annemarie Mol and John Law collaboratively published a research paper detailing how people are more than just the definitions of their disorders or conditions. According to Mol and Law, people actively create and construct their lives in ways that either enhance or minimize the intensity of their conditions. Mol and Law also explain that “there are boundaries around the body we do…so long as it does not disintegrate, the body-we-do hangs together. It is full of tensions, however.”[9] Their conclusion about what makes a person pull through encapsulates why my mother still continues to care for my brother.

The definition of cerebral palsy as a condition is very limited. Oftentimes, people with debilitating conditions are missing a network or support system of people that, once established, can substantially improve their way of life. With the family-centered approach to managing care, one enables the disabled family member by actively being a part of their life, including their day-to-day activities. For example, through the support system we provide for Sam, he can feel that he is in good hands and that he has emotional and personal security. Although his condition is permanent, it is comforting to know that our family dynamics allow for an environment in which he can thrive while remaining mentally healthy.

 

References

  1. Borad, Geeta. “Practices of Care, Interviews.” 8 Dec. 2018.
  2. Debello, William, and Lauren Liets. “Motor Systems.” Lecture, NPB 101, Davis, CA, 20 Jan. 2020. 
  3. Emos MC, Rosner J. Neuroanatomy, Upper Motor Nerve Signs. [Updated 9 Apr. 2020]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2020 January. https://www.ncbi.nlm.nih.gov/books/NBK541082/
  4. Lava, Neil. “Upper Motor Neuron Lesions: What They Are, Treatment.” WebMD, 11 May 2018, https://www.webmd.com/multiple-sclerosis/upper-motor-neuron-lesions-overview#1.
  5. Chang, Eric et al. (2013). “A Review of Spasticity Treatments: Pharmacological and Interventional Approaches.” Critical reviews in physical and rehabilitation medicine vol. 25, 1-2: 11-22. 
  6. Jan, Mohammed M S. (2006). “Cerebral palsy: comprehensive review and update.” Annals of Saudi Medicine. vol. 26, 2. 
  7. Cerebral Palsy Guide. “Causes of Cerebral Palsy – What Causes CP.” Cerebral Palsy Guide. 21 Jan. 2017, https://www.cerebralpalsyguide.com/cerebral-palsy/causes/
  8. King S, Teplicky R, King G, Rosenbaum P. (2004). “Family-centered service for children with cerebral palsy and their families: a review of the literature.” Semin Pediatr Neurol.
  9. Mol, A & Law, J. (2004). “Embodied Action, Enacted Bodies: the Example of Hypoglycaemia.” Body & Society, 43–62.