
Author Archives: wusarah

Want to Get Involved In Research?

The BioInnovation Group is an undergraduate-run research organization aimed at increasing undergraduate access to research opportunities. We have many programs ranging from research project teams to skills training (BIG-RT) and Journal Club.

If you are an undergraduate interested in gaining research experience and skills training, check out our website (https://bigucd.com/) to see what programs and opportunities we have to offer. In order to stay up to date on our events and offerings, you can sign up for our newsletter. We look forward to having you join us!

Newest Posts

The Universal Solvent

By Elaina Covey, Biochemistry & Molecular Biology ‘22

This is a digital painting I drew that was inspired by the importance of clean water on our planet. I painted this with my iPad Pro using the latest version of Procreate. The girl, who is the subject of this drawing, is meant to represent life on Earth. I stuck to a color palette consisting primarily of greens and blues to reflect nature and the planet. I was also inspired by a quote from the American educator Loren Eiseley, who wrote in an essay titled “The Flow of the River” that “if there is magic on this planet, it is contained in water.” For this reason, I wanted to make a piece that evokes a feeling of magic and wonder. However, there is also a sense of danger. The girl, who is nearly drowning, serves to remind us that pollution in the form of oil runoff, plastics, agricultural waste, and acidification threatens our oceans daily. Additionally, as carbon pollution increases and annual temperatures rise, sea levels are rising as well. The girl’s drowning may also serve to remind the viewer of the danger faced by many species who rely on our planet’s ice caps for survival. I hope this piece can inspire others to recognize the importance of protecting Earth’s amazing biodiversity.

Varying Efficacy and Safety Among Food Allergy Immunotherapy Methods

By Karishma Sira, Biological Sciences ‘21

Author’s Note: This review was originally written for my UWP104F class in Winter Quarter 2021. While environmental allergies are well known to the public, many people are unaware of the social, mental, financial, and, most importantly, physical costs of food allergies. I benefited greatly from being treated for food allergies through immunotherapy, so I want to make these methods better known. I want to raise awareness of the available non-avoidant treatments catered to food allergy sufferers and inform readers that these methods are important developments happening in the world of food allergy immunotherapy. This article will also explain the basic mechanisms of immunotherapy, the differences between each delivery method, the relative effectiveness of these methods, and the risks and benefits of each method. These factors should all be considered when recommending a specific method to an allergic individual.

 

Food allergies are becoming an increasingly common global health crisis. The various consequences of living with food allergies reduce the quality of life of those affected [1]. Aside from the immediate dangers of severe allergic reactions, there are significant social restrictions and anxiety involved. Dealing with food allergies costs American individuals and families 25 billion dollars annually [2]. Avoidance diets are the most common way to treat food allergies, but they are statistically unsustainable: 75% of peanut-allergic children are accidentally exposed to peanuts by the time they are 5 years old [1]. As a result, allergy immunotherapy is an important developing preventative treatment that can allow individuals to consume allergens, improving their quality of life.

There are three main emerging treatments: oral immunotherapy (OIT), sublingual immunotherapy (SLIT), and epicutaneous immunotherapy (EPIT). All three use different delivery methods to introduce the patient to allergens to achieve desensitization. These different delivery methods may contribute to the different levels of success observed among them.

The guiding principle of food allergy immunotherapy, regardless of delivery method, is to induce a state of prolonged desensitization, defined as an increase in tolerance threshold, to an allergen [3]. This may be achieved by maintaining consumption of the allergen over time through doses tailored to the patient’s observed tolerance threshold. Tolerance thresholds are determined with food challenges, in which the patient consumes the allergen until they experience notable allergic symptoms [4]. Desensitization may be gradually achieved through increases in dosage [3, 4]. Allergen doses slowly increase over time as the patient’s tolerance increases. Administering the allergen this way is thought to familiarize the body with it so that the immune response to the allergen gradually becomes less severe over time [5].

Generally, the immune response to allergens is mediated by allergy-specific antibodies called Immunoglobulin E (IgE). Once a food allergen has been ingested and detected by the immune system, IgE activates immune cells that cause inflammation and other allergy symptoms. Immunotherapy attempts to change the immune response so that allergens instead stimulate non-allergy-specific antibodies like Immunoglobulin G (IgG) [5]. IgG antibodies produce a normal immune response to foreign agents such as bacteria and viruses. Training the body to respond with IgG prevents the allergic response, eliminating adverse allergic symptoms.

Immunotherapy aims to create a state of long term desensitization known as sustained unresponsiveness (SU). By achieving SU, patients are more likely to retain tolerance even after they stop taking the regular, repeated doses of allergen. Patients with SU can often freely be in the presence of their allergens or even consume them [3, 4]. SU is not considered a “cure” of allergies. Immunotherapy simply aims to change the Immunoglobulin E-mediated allergic response to a less drastic response that has little to no effect on quality of life [1]. SU is less commonly achieved than desensitization across all delivery methods, with only a small subset of patients reaching SU after years of therapy [6]. Nonetheless, SU remains the ideal end goal for all patients [3]. 

 

Delivery Methods

Across all studies cited in this literature review, delivery methods vary in efficacy depending on the food allergen being treated. Discussing the efficacy of each method for individual food allergens would thus require extensive examination and comparison of many individual studies. This level of specificity is not necessary to explain or compare the efficacies of the three immunotherapies. The duration and safety of each treatment seem to vary widely based on the particular allergic response, tolerance threshold, and specific allergens of an individual. Despite these differences, however, much of the research yields consistent results in the overall relative efficacy of each method. As such, this review will describe a general consensus about the effectiveness of each delivery method across many studies.

 

Delivery Method #1 – Oral Immunotherapy

The first food allergy immunotherapy delivery method, which has recently received Food and Drug Administration (FDA) approval for peanut allergen [7], is oral immunotherapy (OIT). In OIT, the patient ingests allergen protein, often in powder form mixed with other non-allergenic food [5].

OIT has yielded the most promising clinical results of all immunotherapy delivery methods [5]. Most patients treated with OIT have reached desensitization, though SU is less commonly observed [3]. Adverse allergic reactions are reasonably likely to occur during OIT, though most reactions are mild. All reactions can be promptly addressed within a clinical or hospital setting. Despite this, individuals with severe and fast-acting allergic reactions (e.g. anaphylaxis) may still face risks to their physical well-being [3, 6]. As of now, the only immunotherapy approved by the FDA is an OIT treatment for peanut allergen known as Palforzia. It has passed clinical trials and requires additional risk assessments, education, and patient counseling for use [7].

Adjuvant medications, which are used in combination with a treatment to enhance or modify its effects, are being examined as additional measures to make OIT safer. Omalizumab is a monoclonal antibody, an antibody cloned from existing antibodies that can be taken as medicine to assist immune functions. Omalizumab selectively binds to IgE, occupying it enough to suppress the allergic response [8]. Omalizumab appears to have no bearing on the effectiveness of the desensitization process [3]. However, it has been shown to speed up the process and decrease the incidence of adverse allergic reactions. For common allergens like milk and peanut, few to no adverse reactions were observed when Omalizumab was administered to subjects [8]. However, further research and clinical trials with larger sample sizes and a wider array of allergens must be conducted before Omalizumab can be universally used as a safety protocol for food allergy immunotherapy [8].

 

Delivery Method #2 – Sublingual Immunotherapy

The second delivery method is sublingual immunotherapy (SLIT). SLIT requires that liquid or dissolvable extracts of allergens be regularly administered under the tongue, held there for a time, and then swallowed [5]. Using this method, the allergen can be mainly taken into the body by way of antigen presenting cells in the sublingual mucosa found under the tongue. This route avoids enzymes encountered during gastric digestion that might change the structure of the allergen protein. This is useful in ensuring that the immune system becomes fully desensitized to the correct allergen [6]. 

One advantage of SLIT is its safety; adverse allergic reactions and anaphylaxis are not commonly observed [5, 6]. Additionally, using SLIT before OIT is highlighted as a potential benefit. Patients who experience adverse reactions with OIT are generally advised to use SLIT as a stepping stone treatment. This lets them build enough desensitization to make OIT a more viable option, as they experience fewer side effects [5].

 

Delivery Method #3 – Epicutaneous Immunotherapy

The third delivery method in food allergy immunotherapy is epicutaneous immunotherapy (EPIT). Immune cells in the skin called Langerhans cells help introduce the allergen to the body when dermal patches are applied to the skin [5, 9]. Patches are kept on for increasingly longer durations and replaced as instructed by a physician until the patient is mostly unresponsive to the allergen. At this point, patches must still be worn to maintain results, but need only be replaced every 24 hours [2]. 

Absorbing allergens through this route prevents their entry into the vasculature, which is thought to limit severe systemic allergic reactions, resulting only in mild, cutaneous reactions [1, 9]. Similar to SLIT, this makes EPIT’s safety profile better than OIT’s. Additionally, EPIT does not place restrictions on the patient’s lifestyle and does not require close clinical observation like OIT or SLIT [2].

 

Comparing Delivery Methods

As mentioned earlier, OIT is largely considered the most effective of the three immunotherapies described. Most patients are successfully desensitized, and SU, though still infrequent, occurs more often than with the other methods [3, 5].

SLIT has shown modest levels of desensitization but is overall considered less effective than OIT, showing fewer immunologic changes over time [6]. It does not appear to confer high levels of SU [5]. It is unclear whether this is because most patients appear to struggle with completing the recommended duration of treatment [9].

EPIT also demonstrates levels of desensitization comparable to SLIT, with 28-50% of patients showing tolerance to their allergen on average [1, 2]. SU has not been well documented in either EPIT or SLIT [1], which seems to be the main reason why they do not have FDA approval [2]. 

 

Conclusions

Preventative food allergy immunotherapy has been a developing area of study due to a global increase in food allergy incidence [5]. Three prominent immunotherapy delivery methods have emerged with differing efficacies and safety profiles.

OIT is widely considered the most clinically effective and promising delivery method, since it consistently produces desensitization [5]. SLIT shows less consistent desensitization [6], and maintaining treatment is difficult for patients. EPIT shows similar results to SLIT [9]. While SU is not commonly achieved, it is more common in OIT [1, 3], which may explain why the only FDA-approved food allergy immunotherapy is OIT for peanut allergen [7].

The safety and convenience of each method may also affect patient choice. OIT may be the most effective and quick-acting, but it also runs the largest risk of adverse reactions, which warrants close clinical attention during treatment [3, 6]. In contrast, SLIT does not seem to cause many adverse reactions and is encouraged as a stepping stone treatment for patients who would like to move on to OIT once more tolerance to their allergen is built up. This practice seems to make OIT much safer [5], along with the use of medications like Omalizumab [8]. EPIT is also safer than OIT but has the added advantage of being a convenient and low-maintenance treatment [5, 9]. At maintenance, dermal patches used for EPIT only need to be replaced every 24 hours, no clinical observation is required, and there are no restrictions placed on the patient’s lifestyle [2].

As allergies become more common across the globe, more children struggle to adhere to avoidance diets and become vulnerable to accidental exposure to allergens [1]. Immunotherapy methods have developed in the hopes of increasing the quality of life of these food-allergic individuals [1]. Future research may be able to improve on the observed effects and safety of immunotherapy. Ultimately, any progress will help food allergy sufferers improve their quality of life.

 

References:

  1. Costa, C., Coimbra, A., Vítor, A., Aguiar, R., Ferreira, A. L., & Todo-Bom, A. (2020). Food allergy – From food avoidance to active treatment. Scandinavian journal of immunology, 91(1), e12824. doi:10.1111/sji.12824
  2. Kim, E. H., & Burks, A. W. (2020). Food allergy immunotherapy: Oral immunotherapy and epicutaneous immunotherapy. Allergy, 75(6), 1337–1346. doi:10.1111/all.14220
  3. Wood R. A. (2017). Oral Immunotherapy for Food Allergy. Journal of investigational allergology & clinical immunology, 27(3), 151–159. doi:10.18
  4. Marcucci, F., Isidori, C., Argentiero, A., Neglia, C., & Esposito, S. (2020). Therapeutic perspectives in food allergy. Journal of translational medicine, 18(1), 302. doi:10.1186/s12967-020-02466-x
  5. Burks, A. W., Sampson, H. A., Plaut, M., Lack, G., & Akdis, C. A. (2018). Treatment for food allergy. The Journal of allergy and clinical immunology, 141(1), 1–9. doi:10.1016/j.jaci.2017.11.004
  6. Scurlock A. M. (2018). Oral and Sublingual Immunotherapy for Treatment of IgE-Mediated Food Allergy. Clinical reviews in allergy & immunology, 55(2), 139–152. doi:10.1007/s12016-018-8677-0
  7. Caccomo, S. (2021). FDA approves first drug for treatment of peanut allergy for children. U.S. Food and Drug Administration. <https://www.fda.gov/news-events/press-announcements/fda-approves-first-drug-treatment-peanut-allergy-children>. 
  8. Dantzer, J. A., & Wood, R. A. (2018). The use of omalizumab in allergen immunotherapy. Clinical and experimental allergy : journal of the British Society for Allergy and Clinical Immunology, 48(3), 232–240. doi:10.1111/cea.13084
  9. Reisacher, W. R., & Davison, W. (2017). Immunotherapy for food allergy. Current opinion in otolaryngology & head and neck surgery, 25(3), 235–241. doi:10.1097/MOO.0000000000000353

Non-Invasive Brain Stimulation Therapies as Therapeutics for Post-Stroke Patients

By Priyanka Basu, Neurobiology, Physiology & Behavior ‘22

Author’s Note: I wrote this review article during my time in UWP102B this past quarter, though my inspiration to dig deeper into this topic came from my personal experience with my uncle, who recently suffered a stroke and has been facing its detrimental effects. I realized I wanted to investigate the possible solutions for him and others, which allowed me to further my knowledge of this field of study. I’d love for readers to understand the complexity and dynamics of non-invasive brain stimulation therapies for post-stroke patients, and their beneficial effects when used in conjunction with other therapies. Though studies are in their preliminary phases and there are quite a few unknowns, it is still important to keep in mind the innumerable therapeutics being created that target patient populations experiencing a certain extent of brain damage; their results are absolutely phenomenal.

 

Abstract

Non-invasive brain stimulation therapies have become a dominant innovation in biotechnology and have proven greatly effective for treating cerebral damage. Stimulation therapies administer magnetic fields or electric currents through the scalp to induce electric fields in the brain, driving currents that pulse through neural circuits. Although several stimulation therapies exist, the therapies discussed in this review include the most widely used therapeutic technologies: transcranial magnetic stimulation (TMS), repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS), and theta-burst stimulation (TBS). Post-stroke patients often experience significant impairments to their sensorimotor systems, which may include the inability to make arm or hand movements, while other impairments include memory or behavioral incapacities. Unlike alternative standardized procedures, stimulatory therapies have been shown to produce neuronal excitability that can improve the impairments seen in these patients. Although the individual efficacies of stimulation therapies have shown viable outcomes, current research examines how the use of stimulation therapies in conjunction with secondary therapeutics can have synergistic effects.

 

Introduction:

Basic stimulation therapies were first put to clinical use in 1985 to investigate the workings of the human corticospinal system [1]. The magnetic field that is produced by stimulation is capable of penetrating through the scalp and neural tissue, easily activating neurons in the cortex and strengthening the electrical field of the brain [1]. By inducing depolarizing currents and action potentials in certain regions of the brain, patients with damaged areas of the cerebral cortex found great relief as they regained a degree of normal functionality in their motor, behavioral, or cognitive abilities [1]. 

In recent years, stroke has become one of the leading causes of death in the United States [2]. Neurologically speaking, stroke can interrupt blood flow in regions of the brain, such as the motor cortex, weakening overall neurological function throughout the body [2]. Stimulatory therapies are used in these cases to activate neurons, which jumpstarts their firing capabilities and rewires the body’s normal functionality [1]. Although reperfusion therapies using thrombolysis have been shown to treat ischemic (i.e., blood-deprived) tissue in stroke patients by removing deadly clots in blood vessels, these therapeutics are often starkly inaccessible to the general population because of their price tag and scarcity [2]. Oftentimes, even standard pharmacological drugs prove ineffective [2]. By way of heavy experimentation, scientists have discovered that the brain can reconstruct itself through a process called cortical plasticity, allowing neural connections to be modified back toward their normal firing pattern [3]. By understanding this innate and adaptive tool that the brain possesses, researchers developed stimulatory therapies to essentially boost our own neural hardware [2].

By investigating how these therapies and their mechanisms work in conjunction with other therapeutics in post-stroke patients, researchers can build an in-depth understanding of further possible advantageous therapies.

 

Mechanism of Non-Invasive Neural Stimulation

Most current noninvasive brain stimulation therapies use similar methodologies, inducing magnetic fields or electrical currents over cerebral cortical regions through the skull to produce rapid excitation of neurons [4]. Some of the most common noninvasive brain stimulation (NIBS) techniques currently used are transcranial magnetic stimulation (TMS), repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS), and theta-burst stimulation (TBS) [4]. Our brains reorganize innately after stroke or cerebral damage through mechanisms of cortical plasticity [3]. However, non-invasive stimulation therapies can stimulate cortical plasticity by modulating neural connections through electrical activation for efficient neuronal and/or motor recovery after the incident [3]. According to Takeuchi et al., TMS and other similar therapies stimulate the cortex through the scalp and the skull. This method positions a coiled wire over the scalp to generate a local magnetic field [4]. As these magnetic fields are pulsed and enter the brain, they establish an electrical current that stimulates cortical neurons, inducing neuronal depolarization (i.e. excitation) [4]. rTMS involves a similar mechanism to TMS, but delivers the magnetic stimulation at a higher repetition rate, inducing a higher-frequency current [5]. Meanwhile, TBS therapy is a modification of rTMS: while TBS operates at a similar overall frequency to rTMS, it delivers the magnetic stimulation in larger bursts rather than in small, frequent pulses [6].

To understand the degree to which noninvasive brain stimulation acts on cortical neural plasticity, it is best to examine its function in the motor cortex, one of the most easily damaged regions of the brain in stroke patients [4]. Neuronally, NIBS can excite the damaged hemisphere, allowing for an increase in activity of the ipsilesional motor cortex [4]. This excitement is highly inducible and is required for proper motor learning and functioning in normal human behavior [7,8]. In addition, these therapeutics may also induce certain metabolic changes that stimulate our innate neural plastic network for successful post-stroke motor recovery [4]. Over time, and with continuous electric stimulation therapy, long-term potentiation of our neural hardware can lead to swift recovery of the affected hemisphere [3]. By this method of magnetic stimulation on damaged cortical regions of the brain, post-stroke patients can recover faster than ever before.

 

Excitability of Motor and Behavioral Neural Networks 

Ultimately, NIBS treatments induce excitability of motor and behavioral neural networks in affected cerebral regions and increase neural plasticity in those regions [5]. In a study led by Delvaux et al., TMS was used to track changes in the reorganization of motor cortical areas of post-stroke patients [9]. The investigators followed a group of 31 patients who had experienced an ischemic stroke in the middle cerebral artery territory leading to severe hand palsy [9]. The patients were clinically assessed with the Medical Research Council scale, the National Institutes of Health stroke scale, and the Barthel Index at set time points after stroke [9]. When the damaged regions were assessed by motor-evoked potential (MEP) amplitudes, responses from affected areas were initially statistically smaller than those from unaffected areas, indicating a lesser degree of motor activation resulting from the damaged regions of the brain [9]. After the affected regions were treated with focal transcranial magnetic stimulation (fTMS), a specific type of TMS therapy, the stimulation ultimately induced excitability of affected motor regions as well as unaffected motor regions, owing to the inducible nature of connected regions in the brain [9]. This study evaluates a TMS technique involving MEP amplitude measurements and first dorsal interosseous (FDI) motor maps that differs from most other stimulatory therapies, including rTMS and tDCS, helping to physiologically characterize the impacts of neurological damage in the brain. Although the study had a relatively small sample size, this can be considered sufficient given the demands of the experimental design and the scarcity of eligible participants. The study participants, ranging between 45 and 80 years old, were screened for underlying neurological disorders to reduce confounding factors. By testing these participants using standardized scales and MEP measurements, the study remained well controlled and reached meaningful conclusions despite its relatively small sample size.

A similar study conducted by Boggio et al. further investigated the effects of NIBS on motor and behavioral neural networks by applying differently charged (anodal (+) and cathodal (-)) current stimulations to stroke patients and then identifying enhanced results. The investigation studied a specific brain stimulatory therapeutic, tDCS, for its excitability and potential benefits in post-stroke patients [7]. Investigators tested the motor performance and improvements of stroke patients using two experiments [7]. Experiment 1 was conducted over four weekly sessions using sham (placebo), anodal (excitatory current), and cathodal (inhibitory current) transcranial direct current stimulation (tDCS) therapies [7]. In Experiment 2, five daily sessions of only cathodal tDCS treatments were applied to affected brain regions [7]. The effects were reported following the procedure and blindly evaluated using the Jebsen-Taylor Hand Function Test, a standardized test of gross motor hand function [7]. Between the two experiments, the most significant motor and behavioral improvements were found using the three stimulations in Experiment 1 [7]. However, when stimulations were compared individually, viable motor functional improvement was still evident with either cathodal or anodal tDCS on the unaffected and affected hemispheres, respectively, when compared to sham tDCS [7]. Using daily sessions instead of weekly sessions was found to be more beneficial in terms of lasting treatment results [7]. The investigators concluded that their findings strongly support other tDCS research on motor function improvement in stroke patients [7]. tDCS is considered safe, representative, and inexpensive, allowing for further research on the technique with a wider range of patients. The study could have included evaluations of additional motor capabilities rather than focusing on the hand alone, providing variation, additional variables, and detail rather than simply validating the technique. Both experiments analyzed above produced statistically significant results and demonstrate the excitable capabilities of stimulatory therapies currently used for post-stroke patients.

 

Effectivity of Alternative Neural Therapeutics in Conjunction with NIBS Therapies 

Although standard NIBS therapies have been shown to provide impressive solutions for post-stroke patients, there have been few studies exploring the prospects of using NIBS in conjunction with other therapies for these patients. Aphasia, an impairment of the ability to comprehend or express language, is a common neurological disorder often seen in post-stroke patients as a result of damage to the speech and language control centers of the brain [10]. A number of therapeutics address not only post-stroke motor dysfunction but also the behavioral dysfunctions of stroke, including aphasias. For several years, studies have investigated the use of an intonation-based intervention, melodic intonation therapy (MIT), in patients with severe non-fluent aphasia, showing immense benefits [10]. A study conducted by Vines et al. (2011) expanded on these findings and implemented MIT alongside an additional brain stimulatory therapy, transcranial direct current stimulation (tDCS), to determine whether there are augmented benefits of MIT in patients with non-fluent aphasia [10]. Six patients with moderate to severe non-fluent aphasia underwent three days of anodal-tDCS therapy with MIT and another three days of sham-tDCS therapy with MIT [10]. The two types of treatment were separated by one week and assigned randomly [10]. The study showed that, compared with sham-tDCS with MIT, anodal-tDCS with MIT led to statistically significant improvements in the patients’ fluency of speech [10]. The study demonstrated that the brain can reorganize and heal damage to its language centers through the combined therapies of anodal-tDCS and MIT, restoring neurological activity in non-fluent aphasia patients [10]. However, one important limitation of this experiment was its small number of subjects; with only six patients, a larger sample would have been needed for more reliable and generalizable results. Although this study was small, it did include a range of participant ages, reducing confounding effects of age-related neurological differences.

An additional study important to understanding the prospects of conjunctive stimulatory therapy was conducted in 2012 by Avenanti et al. The study sought to understand the possible benefits of combining non-invasive brain stimulation therapy (rTMS) with physical therapy. Many studies have investigated the effects of TMS alone on chronic stroke patients, but few have investigated the combination of TMS with physical therapy. In a double-blind, randomized experiment, Avenanti et al. (2012) investigated a group of 30 patients who were given either real or sham repetitive transcranial magnetic stimulation (rTMS) either before or after physical therapy (PT) [5]. The outcomes of this experiment were evaluated based on dexterity and force manipulations of motor control [5]. The study found that, overall, patients given real rTMS treatments in conjunction with PT developed statistically better behavioral and neurophysiological outcomes, and these outcomes were further enhanced when stimulation was delivered before physical therapy in a sequential manner [5]. Improvements were detected in all conjunctive groups (real or sham, before or after PT), and even with PT alone in certain experimental groups [5]. The researchers concluded that treating chronic stroke patients with motor disabilities with rTMS before PT provided optimal results for motor excitability, though the conjunctive outcome was effective as well [5]. With statistically significant results, the study indicates valid conjunctive benefits of both PT and rTMS therapy for the patients evaluated [5]. Regarding reliability, each method was properly implemented, with proper controls in the sham trials [5].

Conjunctive therapies offer new insight into possible avenues for treating post-stroke patients that may be more advantageous than either therapy used alone. With new investigations in this field of study, previously unknown avenues are slowly being uncovered, allowing for better solutions to cerebral and ischemic damage.

 

Conclusion:

Non-invasive brain stimulation (NIBS) therapies are a well-refined and successful therapeutic for post-stroke patients. Although NIBS therapies are among the mainstream solutions for damaged cerebral regions, current research is still seeking to identify conjunctive therapies that can be paired with NIBS to improve treatment. Standard stimulatory procedures use measurable magnetic or electric currents to depolarize, or excite, regions of the brain so that neurons resume proper activity. In doing so, our innate system of neural plasticity works with this stimulation to enhance the recovery of damaged cerebral regions. In recent years, scientists have taken a step further and combined stimulatory therapies with additional stroke therapies to further enhance results. Although early research has begun, more studies and trials are necessary to provide sufficient data to strongly confirm their efficacies, even where promising results have already been found. Several studies lack the number of participating patients, data, and resources needed to conclusively validate these conjunctive therapies. Further investigation of these treatments through repeated trials, larger sample sizes, and statistically significant results may lead to a better understanding of effective conjunctive treatments for post-stroke patients.

 

References:

  1. Santos MD dos, Cavenaghi VB, Mac-Kay APMG, Serafim V, Venturi A, Truong DQ, Huang Y, Boggio PS, Fregni F, Simis M. 2017. Non-invasive brain stimulation and computational models in post-stroke aphasic patients: single session of transcranial magnetic stimulation and transcranial direct current stimulation. A randomized clinical trial. Sao Paulo Medical Journal. 135(5): 475–480.
  2. Kubis N. Non-Invasive Brain Stimulation to Enhance Post-Stroke Recovery. 2016. Front Neural Circuits. 10:56.
  3. Chen R., Cohen L. G., Hallett M. 2002. Nervous system reorganization following injury. Neuroscience 111, 761–773.
  4. Takeuchi N, Izumi S. 2012. Noninvasive brain stimulation for motor recovery after stroke: mechanisms and future views. Stroke Res Treat. 584727. 
  5. Avenanti A., Coccia M., Ladavas E., Provinciali L., Ceravolo M. G. 2012. Low-frequency rTMS promotes use-dependent motor plasticity in chronic stroke: a randomized trial. Neurology 78, 256–264.
  6. van Lieshout ECC, Visser-Meily JMA, Neggers SFW, van der Worp HB, Dijkhuizen RM. 2017. Brain stimulation for arm recovery after stroke (B-STARS): protocol for a randomised controlled trial in subacute stroke patients. BMJ open. 7(8): e016566.
  7. Boggio P. S., Nunes A., Rigonatti S. P., Nitsche M. A., Pascual-Leone A., Fregni F. 2007. Repeated sessions of noninvasive brain DC stimulation is associated with motor function improvement in stroke patients. Restor Neurol Neurosci. 25, 123–129.
  8. Bucur M, Papagno C. 2018. A systematic review of noninvasive brain stimulation for post-stroke depression. Journal of affective disorders. 238: 69–78.
  9. Delvaux V., Alagona G., Gérard P., De Pasqua V., Pennisi G., Maertens de Noordhout A. 2003. Post-stroke reorganization of hand motor area: a 1-year prospective follow-up with focal transcranial magnetic stimulation. Clin. Neurophysiol. 114, 1217–1225.
  10. Vines BW, Norton AC, Schlaug G. 2011.  Non-invasive brain stimulation enhances the effects of melodic intonation therapy. Frontiers in psychology. 2:230.

Sox10 as a Focal Point for Understanding Schwann Cell Differentiation

By Carly Adamson, Neurobiology, Physiology & Behavior ‘21

Author’s Note: I wrote this literature review for a UWP 104E assignment for which we could pick any science topic that interested us. I chose neural crest cells (NCCs) because they are the research focus of Dr. Crystal Rogers’ developmental biology lab, which I intern for on campus, and because they have such diverse fates. When I wrote this piece, I had just started my internship, and I wanted to connect the lab’s research to my own interests in the peripheral nervous system. This review explains and connects five key discoveries within the history of NCC development research. My intended audience includes both neurodevelopmental specialists and a broader group of biologists with little background in NCCs. I used the terminology necessary for this specialized analysis while also drawing main conclusions in simpler language. I relate to the reader with metaphorical and illustrative language, as I yearned for such explanations in my own exploration of this complex research topic.

 

Abstract 

Schwann cell (SC) development through neural crest cell (NCC) migration and differentiation is a fascinating and important topic since these cells are critical for nervous system function. On the journey to becoming SCs, some Schwann cell precursors (SCPs) stay in their partially-differentiated state to guide other developing cells and to provide a ready supply of a variety of NCC derivatives whenever needed in development. There is a lot left to understand in this intricate process, including how the timeline of SCP development aligns with other neurodevelopmental processes. This research review focuses on key studies about the network of transcription factors, regulators, and enzymes that take multipotent cells from a central region to the final fate of SC maturity. This review also highlights Sox10, a key transcription factor, as a central point to ground the reader in all other discoveries surrounding SC differentiation. 

 

Keywords: Schwann cells, Schwann cell precursors, Sox10, neural crest cells, neurodevelopment, glial growth factor 

 

Introduction

Neural crest cells (NCCs) are the foundation for a variety of key structures, from pigment cells of the skin to neurons and glia of the periphery. NCCs are multipotent, meaning that they can continue to divide in an undifferentiated state as well as differentiate into a wide range of mature cell types. However, they are also the starting point for many pathologies in abnormal development, including digestive tract abnormalities, motor disabilities, and cancer [1]. These fascinating cells have long been shown to differentiate based on networks of environmental signals, as demonstrated by extensive transplantation experiments [2]. All NCCs originate from a structure aptly named the neural crest (NC), which distinguishes vertebrates from other chordates [3]. Many gene networks pattern the differentiation, migration, and maintenance of pluripotency of these cells. NCCs must delaminate from the neural tube and migrate to their target tissues through a process known as epithelial-mesenchymal transition (EMT) to become diverse derivatives such as craniofacial bone, pigment, or neurons in the developing organism. Some NCCs even stay multipotent after migration to establish local stem cell populations [4]. NCCs were first described by Dr. Wilhelm His over 150 years ago, and much about the cells’ diverse functions has been uncovered since then.

One of these diverse functions is the formation of Schwann cell precursors (SCPs). These NCC derivatives migrate along embryonic nerve fibers, supplying stem cells wherever needed in the embryo [5]. After a whirlwind of developmental signaling pathways and context-dependent regulation, mature Schwann cells (SCs) are created. SCs are a key player in the peripheral nervous system, providing insulation and structural support to nerves [6]. SCs create multilayered fatty structures called myelin sheaths that allow action potentials to quickly conduct along a nerve fiber. While a developing embryo must regulate many cell proliferation and differentiation processes at once, it is important to specifically balance SC myelination and differentiation to efficiently develop peripheral nerves that can relay information to their neighbors and listen to external signals. A failure to balance these processes can lead to motor and sensory disabilities in an individual. In addition, the number of SCs that proliferate must match the number of axons in a one-to-one relationship to properly sort cells for subsequent myelination [1]. This balance requires a hefty molecular team, each with key roles in guiding SCs to maturity.

A main player in SC development is Sox10, a protein that induces NCCs to differentiate into components of the peripheral nervous system. Sox10 is a transcription factor, meaning that it binds a specific DNA sequence to regulate the expression of other genes. Sox10 has been linked to a multitude of developmental processes, including three activities relevant to this review: the formation of the neural crest itself, the formation of the peripheral nervous system, and the complete differentiation process of SCs. Without the expression of the Sox10 gene, no glial cells can form in vivo or in vitro [6]. Since Sox10 plays such key roles in multiple stages of vertebrate development, its expression must be tightly regulated. In order to understand the path from multipotent NCCs to fully-developed SCs that keep the organism alive, one must dive into the complicated web of Sox10 control.   

 

Identification of potential SC development factors

The significance of early work in SC development studies can be sorted into two categories: 1) setting the foundation for NCC isolation techniques and 2) identifying genes for further inquiry into their potential roles in SC development. A publication by Buchstaller et al. in the Journal of Neuroscience describes the genetic methods of expressing  fluorescent proteins in mice to identify and isolate NCCs and developing SCs. It is important to obtain pure populations of NCCs to study their development and differentiation, as a clean starting point gives the most accurate results once specific induction factors are applied. These broad methods, along with the genetic protocols of RNA amplification and in situ hybridization, allowed later researchers to study many different NCC lineages. A notable candidate selected from this study is Oct6, a transcription factor that will be further discussed below for its role in SC development [7]. The work of Buchstaller et al. provided the foundation for further investigations into the roles of individual transcription factors and signaling proteins in SC differentiation.

 

Neuregulin-1/ErbB signaling

A key discovery earlier in neural crest experimentation was a particular environmental factor, glial growth factor (GGF), also known as neuregulin-1, that prevents rat NCCs from differentiating into neuronal cells and instructs them to instead differentiate into glial components, such as SCs. This environmental factor works to inhibit Mash1, a very early and essential marker for neuronal differentiation. By blocking Mash1, neuregulin-1 prohibits developing cells from ever starting down the path to neuronal maturity, suppressing the fate of neurons entirely. This result, along with rigorous experiments designed to replicate this finding, identified GGF as the first factor shown to both promote one NCC fate and suppress another [2]. This study changed the theory of SC differentiation by confirming neuregulin-1 as a key SC regulator and proposing the mechanism behind it. 

Later research further investigated the role of neuregulin-1 in SC development by focusing on the protein’s interplay with Sox10. A 2020 study from Yang et al. found that the signaling pathway involving neuregulin-1 and its family of epidermal growth factor receptors, ErbB type, maintains expression of Sox10 in differentiating SCPs [6]. By uncoupling ErbB signaling from SC differentiation via two experimental groups, this study was able to show that the combination of ErbB2 and neuregulin are required to produce SC phenotypes and that neuregulin works by affecting Sox10 expression. 

 

Oct6’s synergy with Sox10

Oct6 is a transcription factor that works synergistically with Sox10 to promote the myelination of SCs [6].  Jagalur et al. used cell culture and cloning methods in rat models to elucidate the role of Oct6 in connecting the regulation networks of the promyelinating SC, which has been paired with a neuron but lacks a complete myelin sheath, to the SC that actively ensheaths axons. The 2011 paper concluded through comparative genome studies that Sox10 proteins pair up to form structures called dimers and bind the Oct6 gene. This interaction creates a greater regulatory complex and allows developing SC populations to respond to environmental cues [8]. This study defines a key regulatory mechanism for timing the onset of SC myelination, which is highly important to neurodevelopment in an individual and acutely affects their prognosis. This understanding of Oct6 provides another piece to the puzzle of SC development: differentiating NCCs must be able to understand cues from pre-existing, mature cells. 

 

Histone deacetylases modify multiple SC factors

Figure 1: Bottom left image shows that the inhibition of HDAC1/2 yields lower Pax3 expression in JoMa1 cells. Image credit: Jacob et al., doi:10.1523/jneurosci.5212-13.2014.

Histone deacetylases are transcriptional regulators that remove acetyl groups from DNA histones to condense chromatin and to decrease DNA interactions with transcription factors. These enzymes can also remove acetyl groups from transcription factors themselves to modulate their activity. These enzymatic activities tie into SC development as demonstrated by Jacob et al.’s discovery in 2014 on the functions of histone deacetylase 1 and 2 (HDAC1/2). This study explains that HDAC1/2 are necessary for the myelination and complete maturation of SCs. These enzymes can be placed into the web of protein interactions that together regulate SC differentiation and myelination. This study used a combination of mouse neural crest explants and colonies, which differentiate into glial components in the presence of neuregulin, from the NCC-derived lineage called JoMa1 to get a more complete picture of transcriptional regulation. Jacob et al. show that HDAC1/2 unwind the tightly-packed DNA of the Pax3 gene to facilitate  the expression of Pax3, an important transcription factor that maintains Sox10 [9]. This study produced a major change in theory due to the recognition of HDAC1/2 as induction factors for peripheral glia, including SCs, through their control of lineage-specific transcription factors, like Sox10.

 

Hippo/YAP/TAZ signaling

Figure 2: Control numbers and myelination of Schwann cells contrasts sharply with TAZ/YAP double knockouts (dcKO) on the far right. Image credit: Deng et al., doi: 10.1038/ncomms15161.

Hippo signaling refers to a pathway that affects cell proliferation and a process of controlled cell death known as apoptosis. It is primarily moderated by three factors: Hippo, TAZ, and YAP. TAZ and YAP proteins activate cell cycle regulators to promote the proliferation of SCs, and they also work with Sox10 to direct differentiation regulators for myelination [3]. The direct targets of these two proteins are still not fully understood, but immunolabeling results published by Deng et al. in 2017 revealed TAZ and YAP to be necessary for SC proliferation and myelin induction. Single knockouts, which disable the target gene from functioning in an organism, for TAZ or YAP in mice showed how these two proteins can compensate for one another’s expression to produce normal SC phenotypes if only one factor is present. In contrast, double knockout mice showed a dramatic reduction in mature SCs due to decreased Sox10 expression [1]. The research that led to understanding the interplay of TAZ and YAP in this signaling network is key to SC developmental theory because the prognosis of mice without these genes is so bleak. Individuals without the proper number of mature SCs are not viable for long after birth due to severe motor and sensory deficits. These unfortunate phenotypes show the importance of TAZ and YAP in SC development as well as the importance of SCs to vertebrate life.

 

Further neuregulin-1/ErbB signaling analysis 

Research on the neuregulin-1/ErbB signaling mechanism is key to understanding SC differentiation because this signaling makes important decisions early in differentiation that completely change the fate of a young NCC. 

A closer look at Shah et al.’s 1994 Cell publication reveals advanced techniques that set an impressive foundation for future NCC studies. In order to understand neuregulin’s role in SC development, the paper aimed to answer the following question: When do NC-derived cells first start responding to neuregulins? Scientists used antibody staining, or immunocytochemistry, to fluorescently label NCCs with key proteins to track SC development as well as neuronal development for contrast. They stained for three proteins: 1) glial fibrillary acidic protein (GFAP) to tag immature SCs, 2) a mature Schwann cell-associated tyrosine kinase, c-Neu, and 3) peripherin to track cells developing into neurons. By analyzing NCC colonies from their early, undifferentiated state, scientists were able to study neuregulin’s instructional role in SC development.

The study found colonies of NCCs grown in neuregulin-1 to have no peripherin staining but intense GFAP staining, indicative of high levels of developing SCs and no developing neurons. This result suggests that the presence of neuregulin guides NCCs towards SC development and away from neuronal development. To confirm this result, scientists tested neuregulin-positive cell colonies for two additional Schwann cell-specific markers: P0 and O4. The presence of these protein markers in the neuregulin-treated colonies precisely identifies the cells as SCs, thus confirming neuregulin’s role in exclusively instructing SC differentiation. To confirm neuregulin’s unique role in promoting SC differentiation while suppressing neuronal differentiation, Shah et al. conducted careful colony analysis to confirm that the control and neuregulin-treated colonies had similar percentages of colony survival. This result shows that neurons were inhibited from ever forming in the first place rather than later degrading in neuregulin-positive media [2]. This experiment used rigorous colony analysis and previously validated protein markers to confirm their hypothesis of neuregulin’s two-pronged effects in directing NCC fate. 

Yang et al.’s 2020 work builds upon Shah et al.’s 1994 publication to examine the interplay between neuregulin and Sox10. This research is important because the study of individual protein factors is not enough to understand the complex control of a developmental process such as SC differentiation. The scientists isolated three groups of bone marrow mesenchymal stem cells (BMSCs) from mice: a control group given standard induction factors, a group in which neuregulin’s main receptor was blocked, and a group that never received any neuregulin. RT-PCR was used to quantify the amount of key transcriptional regulator proteins as a measure of the degree of stem cell differentiation into Schwann-like cells. The most significant results come from the ErbB2-inhibited cells examined with immunofluorescence staining, which exhibited significant decreases in multiple SC markers, large reductions in SC proliferation, and a significant decrease in Sox10 expression. These results show that ErbB2 and neuregulin must work together to induce stem cell differentiation into SCs. Further analysis of these results uncovered a positive feedback loop between neuregulin and Sox10, meaning that the interaction of the two factors strongly amplifies the signals required to quickly create mature SCs in the developing embryo [6].

 

Conclusion

Current state of theory

As of this year, the entire path from NCCs to SCs is still not completely understood. It is difficult to create conditions that allow both NCC induction and the maintenance of their undifferentiated state. A second difficulty arises because new proteins are frequently added to a puzzle of cell signaling that is also not yet fully understood. There are only a few transcriptional regulators of NCCs that have been studied in detail, and little is known about the products of effector genes for the migration of NCCs [3]. As established in this review, at least four major cell signaling pathways are described to affect SC development. However, a comprehensive map of how these different pathways communicate and combine to deliver the product of mature SCs is not yet defined. Sox10 continues to be a key factor in SC differentiation, and its regulation proves to be more and more complicated with each new protein factor discovery. Nonetheless, the collective of rigorous science allows clarity to be found bit by bit, and there is great potential in the future of NCC-based therapeutics. 

Figure 3: Summary of the molecular interactions between Sox10 and the four protein factors described in this review, guiding NCCs through the process of SC differentiation.

Important questions for future research

A key area of inquiry examines how this research on animal models might translate to regenerative medicine and stem cell-based therapeutics in humans. To be therapeutic for regenerative medicine, induced NCCs need to have as close to normal differentiation and population patterns as possible. This involves future research on functional comparisons between NC-derived stem cells in postnatal organisms and embryonic stem cells [5]. SCs hold a high potential for regenerative medicine due to their natural role in axonal regrowth following peripheral nerve damage [10]. The applications of this research are exciting, but there is still a long way to go in understanding the wide range of applicable protein activities. 

Understanding normal patterns of SC development will help develop treatments for abnormal patterns, like SC tumors. Uncontrollable SC differentiation is a known characteristic of some cancers [1]. Recent research has identified a tumor suppressor, Nf2, whose inactivation leads to schwannomas and hyperplasia in mouse models [3]. In addition, future manipulation of the Hippo signaling described in this article could compensate for myelin insufficiency without risking an overproduction of myelin that may lead to tumors [1]. The wide range of proteins described thus far as regulators of Sox10’s activity demonstrates the importance of continued funding for basic SC research. Finally, the modular content of this review supports the importance of further studies that focus on the interplay between cell signaling pathways to one day obtain a highly detailed, web-like recipe for SC differentiation.

 

References:

  1. Deng, Y., Wu, L. M. N., Bai, S., Zhao, C., Wang, H., Wang, J., et al. 2017. “A reciprocal regulatory loop between TAZ/YAP and G-protein Gas regulates Schwann cell proliferation and myelination.” Nat. Commun. 8, 1–15. doi: 10.1038/ncomms15161.
  2. Shah, N. M., Marchionni, M. A., Isaacs, I., Stroobant, P., & Anderson, D. J. 1994. “Glial Growth Factor Restricts Mammalian Neural Crest Stem Cells to a Glial Fate.” Cell, 77, 349-360.
  3. Méndez-Maldonado, K., Vega-López, G. A., Aybar, M. J., & Velasco, I. 2020. “Neurogenesis From Neural Crest Cells: Molecular Mechanisms in the Formation of Cranial Nerves and Ganglia.” Frontiers in Cell and Developmental Biology, 8: 1-15. doi:10.3389/fcell.2020.00635.
  4. Kunisada, T., Tezulka, K., Aoki, H., & Motohashi, T. 2014. “The stemness of neural crest cells and their derivatives.” Birth Defects Research Part C: Embryo Today: Reviews, 102(3), 251-262. doi:10.1002/bdrc.21079
  5. Perera, S. N., & Kerosuo, L. 2020. “On the road again – establishment and maintenance of stemness in the neural crest from embryo to adulthood.” Stem Cells Journals. doi:https://doi.org/10.1002/stem.3283
  6. Yang, X., Ji, C., Liu, X., Zheng, C., Zhang, Y., Shen, R., & Zhou, Z. 2020. “The significance of the neuregulin-1/ErbB signaling pathway and its effect on Sox10 expression in the development of terminally differentiated Schwann cells in vitro.” International Journal of Neuroscience, 1-10. doi:10.1080/00207454.2020.1806266
  7. Buchstaller J, Sommer L, Bodmer M, et al. 2004. “Efficient isolation and gene expression profiling of small numbers of neural crest stem cells and developing Schwann cells.” Journal of Neuroscience. 24: 2357-2365.
  8. Jagalur NB, Ghazvini M, Mandemakers W, et al. 2011. “Functional dissection of the Oct6 Schwann cell enhancer reveals an essential role for dimeric Sox10 binding.” Journal of Neuroscience.;31(23):8585–8594.
  9. Jacob, C., Lotscher, P., Engler, S., Baggiolini, A., Tavares, S. V., Brugger, V., Suter, U. 2014. “HDAC1 and HDAC2 Control the Specification of Neural Crest Cells into Peripheral Glia.” Journal of Neuroscience, 34(17), 6112-6122. doi:10.1523/jneurosci.5212-13.2014.
  10. Nishio, Y., Nishihira, J., Ishibashi, T., Kato, H., & Minami, A. 2002. “Role of Macrophage Migration Inhibitory Factor (MIF) in Peripheral Nerve Regeneration: Anti-MIF Antibody Induces Delay of Nerve Regeneration and the Apoptosis of Schwann Cells.” Molecular Medicine, 8(9), 509-520. doi:10.1007/bf03402160.

Genetic algorithms: An overview of how biological systems can be represented with optimization functions

By Aditi Goyal, Genetics & Genomics, Statistics ‘22

Author’s Note: As the field of computational biology grows, machine learning continues to have larger impacts in research, genomics research in particular. Genetic algorithms are an incredible example of how computer science and biology work hand in hand and can provide us with information that would otherwise take decades to obtain. I was inspired to write a review overviewing genetic algorithms and their impact on biology research after reading a news article about them. This paper is not intended to serve as a tutorial of any kind when it comes to writing a genetic algorithm. Rather, it serves to introduce this concept to someone with little to no background in computer science or bioinformatics. 

 

Introduction

In 2008, Antoine Danchin wrote that “there is more than a crude metaphor behind the analogy between cells and computers.” [1] He also stated that the “genetic program is more than a metaphor and that cells, bacteria, in particular, are Turing machines.” [1] This is the fundamental theory that has been the basis of systems biology and has inspired the development of genetic algorithms. Genetic algorithms (GAs) provide a method to model evolution. They are based on Darwin’s theory of evolution, and computationally create the conditions of natural selection. Using genetic algorithms, one can track the progression of a certain gene or chromosome throughout multiple generations. In this paper, we discuss the components of a genetic algorithm, and how they can be modified and applied for various biological studies.

 

Background

GAs are an example of a metaheuristic algorithm designed to find solutions to NP-hard problems [2, 3]. NP (non-deterministic polynomial-time) problems describe optimization problems for which no efficient, polynomial-time solution is known; solving them by brute force means checking a number of candidates that grows explosively with the size of the problem. This is best understood through an example, the most classic one being the Traveling Salesman Problem [4]. If a salesman has to travel to five different locations, how should he pick the order of his destinations in order to minimize the distance he travels? The brute-force solution is to calculate the total distance of every possible route and pick the shortest one. At five destinations alone, there are 120 possible routes to consider. Naturally, as the number of destinations increases, the number of possible routes grows factorially, as does the time it takes to evaluate them all. In more complicated scenarios, such as a system for predicting evolution, the problem becomes exponentially more difficult to solve and therefore requires optimization.
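To make the brute-force cost concrete, the sketch below enumerates every visiting order for five hypothetical destinations and keeps the shortest one; the coordinates are invented purely for illustration, and the route does not return to its starting point.

```python
# Brute-force search over every visiting order for a handful of destinations.
# The coordinates below are hypothetical and exist only to illustrate the idea.
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6), "E": (3, 7)}

def route_length(route):
    """Total distance traveled when visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

routes = list(permutations(cities))            # 5! = 120 candidate orderings
best = min(routes, key=route_length)
print(len(routes), best, round(route_length(best), 2))
```

With ten destinations the same approach would already have to score 3,628,800 orderings, which is why a GA approximates the answer instead of enumerating every option.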

GAs are “problem independent” optimization algorithms [2, 3]. This means that the parameterization of the algorithm is not tied to any particular problem and can be adjusted by the user to suit the situation at hand. Algorithms of this kind are referred to as metaheuristics, and their key idea is to trade accuracy for efficiency. Essentially, they aim to approximate a solution using the least amount of time and computing power, rather than guaranteeing a highly accurate answer that may take significantly more resources to reach.

 

Components of a Genetic Algorithm

There are only two components essential to creating a GA: a population and a fitness function [5, 6]. These two factors are sufficient to create the skeleton of a GA. However, most GAs are modeled after Darwin’s theory of evolution [5, 6]. They use the components of fitness, inheritance, and natural variation through recombination and mutation to model how a genetic lineage will change and adapt over time [5, 6]. Therefore, these components must also be incorporated into the GA in order to more accurately mimic natural selection. 

Population

A population is defined using the first generation, or F1. This can be a set of genes, chromosomes, or other units of interest [7]. This generation can be represented in several ways, but the most common technique is to use a bit array, where different permutations of 0s and 1s represent different genes [7].
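As a concrete illustration, the snippet below builds a hypothetical F1 generation as a list of bit arrays. The population size and chromosome length are arbitrary choices made for the example, not values any particular GA prescribes.

```python
import random

CHROMOSOME_LENGTH = 8   # number of loci (bits) per individual
POPULATION_SIZE = 6     # number of individuals in the F1 generation

def random_individual():
    """One chromosome encoded as a list of 0s (wild type) and 1s (beneficial allele)."""
    return [random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]

population = [random_individual() for _ in range(POPULATION_SIZE)]
print(population)   # e.g. [[1, 0, 0, 1, 1, 0, 1, 0], [0, 1, 1, 0, 0, 0, 1, 1], ...]
```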

Selection & Fitness Functions

Now that a population has been initialized, it needs to undergo selection. During selection, the algorithm decides which individuals from the population will continue on to the next generation. This is done through the fitness function [3]. The fitness function aims to parameterize the survival of a given individual within the population and provide a fitness score. It accounts for the fitness of each genetic trait and then computes the probability that the trait in question will be carried onwards. The fitness score can be represented in different ways; a common method is a binary system. For example, consider a chromosome defined as a set of bits (see Figure 1). A neutral, or wild-type, allele can be represented with a 0, while a beneficial allele, one that confers some advantage over the wild type, is represented with a 1. The fitness function would then be defined to calculate the fitness of each chromosome. In this example, the fitness is equivalent to the sum of the binary scores.

Chromosomes with higher fitness scores carry more beneficial traits than chromosomes with lower fitness scores. Therefore, chromosomes that maximize the fitness score will be preferred.
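A minimal sketch of this scoring scheme, assuming the bit-array encoding described above: fitness is simply the number of 1s in a chromosome, and parents are sampled with probability proportional to that score (fitness-proportionate, or “roulette wheel,” selection is only one of several common schemes).

```python
import random

def fitness(chromosome):
    """Fitness score: the count of beneficial (1) alleles in the bit array."""
    return sum(chromosome)

def select_parents(population, n_parents):
    """Fitness-proportionate selection: fitter chromosomes are sampled more often."""
    weights = [fitness(individual) for individual in population]
    return random.choices(population, weights=weights, k=n_parents)

population = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0], [0, 0, 0, 1]]
print([fitness(individual) for individual in population])   # [3, 1, 3, 1]
print(select_parents(population, 2))
```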

Inheritance & Genetic Variation

The fittest individuals are then propagated onwards to the “breeding” phase, while only a small proportion of the less fit individuals are carried forward. This is the step that mimics natural selection: the more fit individuals are selected for, and only a small fraction of the less fit individuals survive by chance.

Now that the survivors have been identified, we can use GA operators to create the next generation. GA operators are how genetic variation is modeled [7], and the two most common operators in any GA are mutation and recombination. The F2 generation is created by pairing two individuals from F1 at random and applying these operators to produce a unique F2.

Mutations are commonly represented using bit changes [3]. Because our original population was defined in binary, our mutation probability function represents the probability of a bit switch, i.e. the probability that a 0 would switch to a 1, or vice versa. These probabilities are usually quite low and have a minor impact on the genetic variation.

Recombination, or crossover, is where the majority of new genetic variation arises. Crossovers are modeled by choosing a point of recombination and swapping the bit strings at that point. A simple GA uses a single-point crossover, where only one crossover occurs per chromosome, but a GA can easily be adapted to have multiple crossover points [8, 9].
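The two operators can be sketched as follows, again assuming the bit-array encoding used above. The mutation rate and the randomly chosen crossover point are illustrative parameters rather than recommended values.

```python
import random

MUTATION_RATE = 0.01   # probability of flipping any single bit

def mutate(chromosome, rate=MUTATION_RATE):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def single_point_crossover(parent_a, parent_b):
    """Swap the tails of two equal-length chromosomes at one random point."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

a, b = [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]
print(single_point_crossover(a, b))   # e.g. ([1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1])
print(mutate(a, rate=0.5))            # exaggerated rate so a flip is visible
```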

On average, thanks to the mutation and crossover operators, the fitness level of F2 should be higher than that of F1. By carrying forward some of the less fit individuals, we maintain a larger gene pool and therefore more possibilities for genetic combinations, but the pool should still be dominated by favorable genes [3].

Termination

This three-step pattern of selection, variation, and propagation is repeated until a certain threshold is reached. The threshold can be any of a variety of criteria, ranging anywhere from a preset number of generations to a certain average fitness level. Typically, the algorithm terminates once the population converges, meaning that the offspring generation is no longer significantly better than the generation before it [10].
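Putting the pieces together, the following is a minimal sketch of the whole select–vary–propagate cycle, terminating when a generation cap is reached or the best fitness stops improving. Every constant here (population size, mutation rate, stagnation limit) is an arbitrary placeholder chosen to show the structure, not a tuned setting from the literature.

```python
import random

LENGTH, POP_SIZE, MAX_GENERATIONS, MUTATION_RATE = 12, 20, 100, 0.01

def fitness(individual):
    return sum(individual)                      # count of beneficial alleles

def mutate(individual):
    return [1 - b if random.random() < MUTATION_RATE else b for b in individual]

def crossover(parent_a, parent_b):
    point = random.randint(1, LENGTH - 1)       # single-point crossover
    return parent_a[:point] + parent_b[point:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

best_so_far, stagnant = 0, 0
for generation in range(MAX_GENERATIONS):
    # Selection: sample parents in proportion to fitness (+1 keeps all weights positive)
    weights = [fitness(ind) + 1 for ind in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Variation and propagation: build the offspring generation
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]
    best = max(fitness(ind) for ind in population)
    # Termination: stop at a perfect score or when no improvement is seen for ten generations
    if best > best_so_far:
        best_so_far, stagnant = best, 0
    else:
        stagnant += 1
    if best_so_far == LENGTH or stagnant >= 10:
        break

print(f"Stopped after generation {generation} with best fitness {best_so_far}")
```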

 

Modifications to GAs

As one can see, this is a rather simplistic approach to evolution; several biological factors remain unaddressed in a three-step process. Consequently, there are many ways to expand a GA so that it more closely resembles the complexity of natural evolution. The following sections briefly survey a few of the techniques used in tandem with a GA to add further resolution to this prediction process.

Speciation

A GA can be combined with a speciation heuristic that discourages crossover between two individuals that are very similar, allowing for more diverse offspring generations [11, 12]. Without this speciation parameter, early convergence is a likely possibility [12]. Early convergence describes the situation in which the population loses its diversity and settles on a solution, often a suboptimal one, in too few generations.
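One simple way to encourage diverse pairings, offered here only as a sketch of the general idea rather than the specific heuristics used in the cited studies, is to measure how similar two bit-string parents are and to reject pairs that are nearly identical.

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def compatible(a, b, min_distance=2):
    """Allow a crossover pairing only if the parents differ at enough loci."""
    return hamming_distance(a, b) >= min_distance

print(compatible([1, 0, 1, 1], [1, 0, 1, 0]))   # False: too similar to pair
print(compatible([1, 0, 1, 1], [0, 1, 0, 1]))   # True: diverse enough to pair
```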

Elitism

Elitism is a commonly used approach to ensure that the fitness level will never decrease from one generation to the next [13]. Elitism describes the decision to carry the highest-rated individuals over from one generation to the next with no change [13, 14]. Elitism also ensures that genetic information is not lost. Since each offspring generation must be ‘equal or better’ than the generation before it, it is guaranteed that the parental genotypes will carry through generations, changing at a much slower rate than a pure GA would model [15].
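A sketch of how elitism is typically layered onto the breeding step, assuming the bit-array fitness used earlier: the fittest few parents are copied unchanged into the next generation and displace the weakest offspring, so the population size stays constant. The elite count below is an arbitrary example value.

```python
def apply_elitism(parents, offspring, fitness, n_elite=2):
    """Copy the n_elite fittest parents into the next generation unchanged."""
    elites = sorted(parents, key=fitness, reverse=True)[:n_elite]
    # Keep only the strongest offspring so the population size does not grow
    survivors = sorted(offspring, key=fitness, reverse=True)[:len(offspring) - n_elite]
    return elites + survivors

parents   = [[1, 1, 1, 0], [1, 0, 0, 0], [0, 1, 1, 1], [0, 0, 0, 0]]
offspring = [[1, 0, 1, 0], [0, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
print(apply_elitism(parents, offspring, fitness=sum))
```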

 

Adaptive Genetic Algorithms 

Adaptive Genetic Algorithms (AGAs) are a burgeoning subfield of GA development. An AGA continuously modifies the mutation and crossover operators in order to maintain population diversity while also keeping the convergence rate consistent [16]. This is computationally expensive but often produces more realistic results, especially when estimating the time it would take to reach optimal fitness. Mahmoodabadi et al. compared AGAs to three other optimization functions and found that “AGA offers the highest accuracy and the best performance on most unimodal and multimodal test functions” [17].
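The exact adaptation rules differ between AGA variants; the sketch below captures only the underlying idea of raising the mutation rate when the population looks too uniform and lowering it when diversity is ample. The diversity thresholds and scaling factors are placeholders, not values from the cited studies.

```python
def population_diversity(population):
    """Mean pairwise Hamming distance, normalized by chromosome length (0 to 1)."""
    n, length = len(population), len(population[0])
    total = sum(sum(x != y for x, y in zip(a, b))
                for i, a in enumerate(population) for b in population[i + 1:])
    return total / (n * (n - 1) / 2 * length)

def adapt_mutation_rate(rate, diversity, low=0.1, high=0.3):
    """Nudge the mutation rate up when diversity is low and down when it is high."""
    if diversity < low:
        return min(rate * 2, 0.5)
    if diversity > high:
        return max(rate / 2, 0.001)
    return rate

pop = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 1, 1]]
print(adapt_mutation_rate(0.01, population_diversity(pop)))
```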

 

Interactive Genetic Algorithms

As previously stated, the fitness function is critical to creating a GA. However, there are several instances in which a fitness function cannot be accurately defined. This is particularly true for species with elaborate mating rituals, a form of selection that would be computationally expensive to recreate. In these situations, one can use an interactive genetic algorithm (IGA). IGAs operate in the same fashion as GAs, but they require user input at the fitness-calculation step.
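In code, the only structural change from a standard GA is that the fitness score comes from a person rather than a formula. The sketch below, with a hypothetical rating prompt, shows that substitution.

```python
def interactive_fitness(individual):
    """Ask a human evaluator to score an individual instead of computing a formula."""
    rating = input(f"Rate this candidate from 0 (poor) to 10 (excellent): {individual} ")
    return int(rating)

# Used in place of an automated fitness function during selection, for example:
# scores = [interactive_fitness(individual) for individual in population]
```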

While this method does provide a way of modeling a population without a predefined fitness function, it has glaring drawbacks. Primarily, the process is not feasible for large populations, as it puts the burden of calculating the fitness on the user, and it also leaves room for the user’s subjective bias. However, this subjective component has been viewed as an advantage in several fields, particularly the fashion industry [18]. Designers have been investigating IGAs as a method to generate new styles, as the algorithm depends on user validation of what is considered a good design versus a bad one [18].

 

Applications

Genetic algorithms have a far-reaching effect on computational efforts in every field, especially in biology. As the name suggests, genetic algorithms have a huge impact on evolutionary biology, as they can assist with phylogeny construction for unrooted trees [19]. Oftentimes, evolutionary data sets are incomplete, and the number of potential unrooted phylogenetic trees runs into the billions. As Hill et al. describe, “for only 13 taxa, there are more than 13 billion possible unrooted phylogenetic trees” [19].
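That figure can be checked directly: the number of possible unrooted, fully bifurcating trees for n labeled taxa is the double factorial (2n - 5)!!, which for 13 taxa comes to roughly 13.7 billion. A quick sketch:

```python
def unrooted_tree_count(n_taxa):
    """Unrooted bifurcating trees on n labeled taxa: (2n - 5)!! = 3 * 5 * ... * (2n - 5)."""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):   # odd numbers 3, 5, ..., 2n - 5
        count *= k
    return count

print(unrooted_tree_count(13))   # 13,749,310,575 -- "more than 13 billion"
```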

Testing each of these combinations and determining the best fit is yet another example of an optimization problem, one that a GA can easily crack. Hill et al. applied a GA to a set of amino acid sequences and built a phylogenetic tree comparing protein similarities [19]. They found that a program called Phanto “infers the phylogeny of 92 taxa with 533 amino acids, including gaps in a little over half an hour using a regular desktop PC” [19].

Similarly, the Wong et al. team tackled the infamous protein folding prediction problem using GAs [20]. They used the HP Lattice model to simplify a protein structure and exploited the iterative nature of a GA to find a configuration that minimized the energy required to fold a protein into that shape. The HP (hydrophobic-polar) Lattice model seeks to capture the hydrophobic interactions that occur between different amino acid residues in the secondary structure of a protein [20]. They found that a GA performed better than some of the newer protein folding prediction programs available today [20].

GAs are an incredible tool for cancer research as well. The Mitra et al. team used a GA to study bladder cancer [21]. They conducted quantitative PCR on tissue samples from 65 patients and identified 70 genes of interest. Of these 70 genes, three in particular were identified in a novel pathway. They discovered that ICAM1 was up-regulated relative to MAP2K6, while MAP2K6 was up-regulated relative to KDR. This pathway was considered novel because, individually, none of the three genes displayed significant changes in regulation; by applying a GA, the Mitra team was able to identify the pattern across all three genes. Notably, “ICAM1 and MAP2K6 are both in the ICAM1 pathway, which has been reported as being associated with cancer progression, while KDR has been reported as being associated with the vascularization supporting tumors” [21, 22, 23].

Another groundbreaking discovery was made by applying GAs to p53 research. P53 is an essential tumor suppressor [24]. Most cancerous tumors can be attributed, in part, to a mutation in the p53 gene, making it an excellent candidate for oncology research. The Miller et al. team investigated a possible p53 gene signature for breast cancer, hoping to find an accurate prediction system for the severity of the disease [25]. They analyzed 251 transcriptomes from patient data and found a 32-gene signature that could serve as a predictor for breast cancer severity [23, 25]. They also found that “the p53 signature could significantly distinguish patients having more or less benefit from specific systemic adjuvant therapies and locoregional radiotherapy” [25].

GAs have also had a huge impact on immunology, vaccine development in particular. Licheng Jiao and Lei Wang developed a new type of GA called the Immunity Genetic Algorithm [26]. This system mimics a typical GA but adds a two-step ‘immunological’ parameter (Figure 3). Much like in a GA, the fitness function is applied to a population, which then triggers mutation and crossover. After these steps, however, the program mimics ‘vaccination’ and ‘immune selection.’ These two steps are referred to as the “Immune Operator” [26]. They are designed to confer a genetic advantage on individuals who respond well to the vaccine and a disadvantage on those with a ‘weaker’ immune response. In essence, the vaccination step acts as a secondary mutation, as it is an external factor changing each individual’s fitness. Similarly, the ‘immune selection’ step acts as a secondary fitness function, as it measures the immune response post-vaccine. If evolution is proceeding as it should, each generation should have an improved immune response to the vaccine until convergence is reached.

Conclusion

GAs have a broad reach in all fields of research, from fashion to immunology. Their success is due to three critical components underlying their programming: they are simple to write, easy to customize, and efficient to run. This flexibility and independence will allow programs like GAs to become commonplace in research across all disciplines. In particular, as biology research continues to merge with computer science and advanced modeling techniques, applications like GAs have the potential to solve problems and raise questions about our world that we may never have imagined before.

 

References:

  1. Danchin, Antoine. “Bacteria as computers making computers.” FEMS microbiology reviews vol. 33,1 (2009): 3-26. doi:10.1111/j.1574-6976.2008.00137.x
  2. Said, Gamal Abd El-Nasser A., Abeer M. Mahmoud, and El-Sayed M. El-Horbaty. “A comparative study of meta-heuristic algorithms for solving quadratic assignment problem.” arXiv preprint arXiv:1407.4863 (2014).
  3. Marek Obitko, “III. Search Space.” Search Space, courses.cs.washington.edu/courses/cse473/06sp/GeneticAlgDemo/searchs.html. 
  4. Hoffman K.L., Padberg M. (2001) Traveling salesman problem. In: Gass S.I., Harris C.M. (eds) Encyclopedia of Operations Research and Management Science. Springer, New York, NY. https://doi.org/10.1007/1-4020-0611-X_1068
  5. Whitley, D. A genetic algorithm tutorial. Stat Comput 4, 65–85 (1994). https://doi.org/10.1007/BF00175354
  6. McCall, John. “Genetic algorithms for modelling and optimisation.” Journal of computational and Applied Mathematics 184.1 (2005): 205-222.
  7. Carr, Jenna. An Introduction to Genetic Algorithms. Whitman University, 2014, www.whitman.edu/Documents/Academics/Mathematics/2014/carrjk.pdf. 
  8. Hassanat, Ahmad et al. “Choosing Mutation and Crossover Ratios for Genetic Algorithms—A Review with a New Dynamic Approach.” Information 10.12 (2019): 390. Crossref. Web.
  9. Gupta, Isha, & Anshu Parashar. “Study of Crossover operators in Genetic Algorithm for Travelling Salesman Problem.” International Journal of Advanced Research in Computer Science [Online], 2.4 (2011): 194-198. Web. 17 May. 2021
  10. Alkafaween, Esra’a. (2015). Novel Methods for Enhancing the Performance of Genetic Algorithms. 10.13140/RG.2.2.21478.04163. 
  11. Duda, J.. “Speciation, clustering and other genetic algorithm improvements for structural topology optimization.” (1996).
  12. Duda, J. W., and Jakiela, M. J. (March 1, 1997). “Generation and Classification of Structural Topologies With Genetic Algorithm Speciation.” ASME. J. Mech. Des. March 1997; 119(1): 127–131.
  13. J. A. Vasconcelos, J. A. Ramirez, R. H. C. Takahashi and R. R. Saldanha, “Improvements in genetic algorithms,” in IEEE Transactions on Magnetics, vol. 37, no. 5, pp. 3414-3417, Sept. 2001, doi: 10.1109/20.952626.
  14. Ishibuchi H., Tsukamoto N., Nojima Y. (2008) Examining the Effect of Elitism in Cellular Genetic Algorithms Using Two Neighborhood Structures. In: Rudolph G., Jansen T., Beume N., Lucas S., Poloni C. (eds) Parallel Problem Solving from Nature – PPSN X. PPSN 2008. Lecture Notes in Computer Science, vol 5199. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87700-4_46
  15. Thierens, Dirk. Selection schemes, elitist recombination, and selection intensity. Vol. 1998. Utrecht University: Information and Computing Sciences, 1998.
  16. C. Lin, “An Adaptive Genetic Algorithm Based on Population Diversity Strategy,” 2009 Third International Conference on Genetic and Evolutionary Computing, 2009, pp. 93-96, doi: 10.1109/WGEC.2009.67.
  17. Mahmoodabadi, M. J., and A. R. Nemati. “A novel adaptive genetic algorithm for global optimization of mathematical test functions and real-world problems.” Engineering Science and Technology, an International Journal 19.4 (2016): 2002-2021.
  18. Cho, Sung-Bae. “Towards creative evolutionary systems with interactive genetic algorithm.” Applied Intelligence 16.2 (2002): 129-138.
  19. Hill, Tobias, et al. “Genetic algorithm for large-scale maximum parsimony phylogenetic analysis of proteins.” Biochimica et Biophysica Acta (BBA)-General Subjects 1725.1 (2005): 19-29.
  20. van Batenburg FH, Gultyaev AP, Pleij CW. An APL-programmed genetic algorithm for the prediction of RNA secondary structure. J Theor Biol. 1995 Jun 7;174(3):269-80. doi: 10.1006/jtbi.1995.0098. PMID: 7545258.
  21. Mitra, A.P., Almal, A.A., George, B. et al. The use of genetic programming in the analysis of quantitative gene expression profiles for identification of nodal status in bladder cancer. BMC Cancer 6, 159 (2006). https://doi.org/10.1186/1471-2407-6-159
  22.  Hubbard AK, Rothlein R. Intercellular adhesion molecule-1 (ICAM-1) expression and cell signaling cascades. Free Radic Biol Med. 2000;28:1379–1386.
  23. Worzel, William P et al. “Applications of genetic programming in cancer research.” The international journal of biochemistry & cell biology vol. 41,2 (2009): 405-13. doi:10.1016/j.biocel.2008.09.025
  24. National Center for Biotechnology Information (US). Genes and Disease [Internet]. Bethesda (MD): National Center for Biotechnology Information (US); 1998-. The p53 tumor suppressor protein. Available from: https://www.ncbi.nlm.nih.gov/books/NBK22268/
  25. Miller LD, Smeds J, Joshy G, Vega VB, Vergara L, Ploner A, Pawitan Y, Hall P, Klaar S, Liu ET, Bergh J. An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. Proceedings of the National Academy of Sciences. 2005 Sep 20;102(38):13550–5.
  26. Jiao, Licheng, and Lei Wang. “A novel genetic algorithm based on immunity.” IEEE Transactions on Systems, Man, and Cybernetics-part A: systems and humans 30.5 (2000): 552-561.

A Neuroimmunological Approach to Understanding SARS-CoV-2

By Parmida Pajouhesh, Neurobiology, Physiology & Behavior ‘23

Author’s Note: The Coronavirus Disease has undoubtedly affected us in many sectors of our lives. There has been a lot of discussion surrounding the respiratory symptoms induced by the disease but less focus on how contracting the disease can result in long-term suffering. As someone who is fascinated by the brain, I wanted to investigate how COVID-19 survivors have been neurologically impacted post-recovery and what insight it can provide on more severe neurological disorders. 

 

The Coronavirus Disease (COVID-19) has drastically changed our lives over the past fifteen months. The viral disease produces mild to severe symptoms, including fever, chills, and nausea. There are individual differences in the length of recovery, typically ranging from 1-2 weeks after contraction [1]. Once recovered, those infected are assumed to be healthy and “back to normal,” but data shows that this is not the case for some COVID-19 survivors. COVID-19 has resulted in more severe long-term effects for patients, greatly affecting their ability to perform daily tasks. Taking a deeper look into the neuroimmunological side effects of COVID-19 can help explain the long-term symptoms experienced by survivors. 

Developing our knowledge of long-term neurological effects on COVID-19 survivors is crucial in understanding the risk of cognitive impairments, including dementia and Alzheimer’s disease [2].

A team led by Dr. Alessandro Padovani at the University of Brescia recruited COVID-19 survivors with no previous neurological disease or cognitive impairment for check-ins six months after infection [3]. The exam assessed motor and sensory cranial nerves and global cognitive function. The results showed that the most prominent symptoms were fatigue, memory complaints, and sleep disorder.  Notably, these symptoms were reported much more frequently in patients who were older in age and hospitalized for a longer period of time [3]. 

Other symptoms reported include “brain fog,” a loss of taste or smell, and brain inflammation [2]. Researchers hypothesize that the virus does not necessarily need to make its way inside neurons to result in “brain fog” but instead claim that it is an attack on the sensory neurons, the nerves that extend from the spinal cord throughout the body to gather information from the external environment. When the virus hijacks nociceptors, neurons that are specifically responsible for sensing pain, symptoms like brain fog can follow [4].

Theodore Price, a neuroscientist at the University of Texas at Dallas, investigated the relationship between nociceptors and angiotensin-converting enzyme 2 (ACE2), a protein embedded in cell membranes that allows for viral entry when the spike protein of SARS-CoV-2 binds to it [4, 5]. The nociceptors live in clusters around the spinal cord, which are called dorsal root ganglia (DRG). Price determined that a set of DRG neurons did contain ACE2, enabling the virus to enter the cells. The DRG neurons that contained ACE2 had messenger RNA for the sensory protein MRGPRD, which marks neurons with axons concentrated at the skin, inner organs, and lungs. If sensory neurons are infected with the virus, long-term consequences can follow. The virus may not even need to enter the brain and infect neurons directly; alternatively, the immune response it triggers may act on the brain and lead to the breakdown of the blood-brain barrier [6]. While this area of research is still under investigation, studies have shown that breakdown of the blood-brain barrier and lack of oxygen to the brain are hallmarks of Alzheimer’s disease and dementia. Scientists are tracking global cognitive function to further understand the impact of COVID-19 treatments and vaccines on these neurological disorders. 

Understanding whether the cause of neurological symptoms is viral brain infection or immune activity is important to clinicians who provide intensive care and prescribe treatments [2, 6]. With future studies, researchers plan to further examine the causes of these symptoms. This knowledge will hopefully provide COVID-19 survivors with adequate support to combat these difficulties and reduce their risk of developing a more severe neurological disorder in the future.

 

References :

  1. Sissons, Beth. 2021. “What to Know about Long COVID.” Medical News Today. www.medicalnewstoday.com/articles/long-covid#diagnosis
  2. Rocheleau, Jackie. 2021. “Researchers Are Tracking Covid-19’s Long-Term Effects On Brain Health.” Forbes. www.forbes.com/sites/jackierocheleau/2021/01/29/researchers-are-tracking-covid-19s-long-term-effects-on-brain-health/?sh=59a0bb284303
  3. George, Judy. 2021. “Long-Term Neurologic Symptoms Emerge in COVID-19.” MedPage Today. www.medpagetoday.com/infectiousdisease/covid19/90587
  4. Sutherland, Stephani. 2020. “What We Know So Far about How COVID Affects the Nervous System.” Scientific American. www.scientificamerican.com/article/what-we-know-so-far-about-how-covid-affects-the-nervous-system
  5. Erausquin, Gabriel A et al. 2021. “The Chronic Neuropsychiatric Sequelae of COVID‐19: The Need for a Prospective Study of Viral Impact on Brain Functioning.” Alzheimer’s & Dementia. Crossref, doi:10.1002/alz.12255  
  6. Marshall, Michael. 2020. “How COVID-19 Can Damage the Brain.” Nature. www.nature.com/articles/d41586-020-02599-5?error=cookies_not_supported&code=5b856480-d7e8-4a22-9353-9000e12a8962  

Psychedelics Herald New Era of Mental Health

By Macarena Cortina, Psychology ‘21 

Author’s Note: As a psychology major who used to be a plant biology major, I’m very interested in the arenas where these two fields interact. Such is the case with psychoactive plants and fungi that produce significant alterations in brain chemistry and other aspects of the human psyche. That is why I chose to write about psychedelics and their rebirth in both research and culture. In the past few months, I have seen increasing media coverage of new scientific findings about these substances, as well as legal advancements in their decriminalization, making this a relevant topic in the worlds of psychology and ethnobotany. The history of psychedelics is a long and complicated one, but here I attempt to cover the basics in hopes of demystifying these new powerful therapeutic treatments and informing readers about the latest horizon in mental health. 

 

After decades in the dark, psychedelic drugs are finally resurfacing in the world of science and medicine as potential new tools for mental health treatment. Psychedelics, otherwise known as hallucinogens, are a class of psychoactive substances that have the power to alter mood, perception, and cognitive functions in the human brain. They include drugs such as LSD, magic mushrooms, ayahuasca, MDMA, and peyote [1]. The US has a long and complex history with these drugs, and the resulting criminalization and stigma associated with them have kept psychedelics in the shadows for many years. However, a major shift in society’s opinions of psychedelics is taking place, and a reawakening is happening in the scientific community. Researchers from various disciplines are becoming increasingly interested in unlocking the therapeutic powers of these compounds, especially for those who are diagnosed with mental disorders and are resistant to the treatments that are currently available for them. Whether or not the world is ready for it, the psychedelic renaissance has begun.

Psychedelics have been used by Indigenous communities around the world as part of their cultural, spiritual, and healing traditions for thousands of years. In the Western world, psychedelics were rediscovered in the 1940s by Swiss chemist Albert Hofmann, who accidentally absorbed LSD through his skin while conducting tests for a potential medicine [2]. What followed was an “uninterrupted stream of fantastic pictures, extraordinary shapes, with intense, kaleidoscopic play of colors” [7]. Once LSD was disseminated throughout the world, psychologists began to experiment with it as a psychotomimetic, or a drug that mimics psychosis, in hopes of gaining a better understanding of schizophrenia and similar mental disorders [2, 3]. In the 1950s, as a result of the US government’s fear that communist nations were using mind control to brainwash US prisoners of war, the CIA carried out the top-secret project MK-Ultra, drugging even unwitting subjects with psychedelics in an attempt to learn about potential mind control techniques [4]. Recreational use of psychoactive substances proliferated in the counterculture movement of the 1960s, eventually leading to their criminalization and status as Schedule 1 drugs [5]. This classified them as substances with no medical value and a high potential for abuse—two descriptors we know are not factual [6].  

Now, people seem to be reevaluating their outlook on these formerly demonized drugs and are instead looking for ways to harness psychedelics’ medicinal properties for mental and physical improvement. Momentum is building quickly. Clinical trials are beginning to show real potential in the use of psychedelics for the treatment of depression, anxiety, post-traumatic stress disorder (PTSD), addiction, eating disorders, and emotional suffering caused by the diagnosis of a terminal illness. The US Food and Drug Administration (FDA) has already approved the use of ketamine for therapeutic purposes, with MDMA and psilocybin set to follow [7]. Psilocybin has also been decriminalized in cities across the US and was completely legalized for medical use in the entire state of Oregon in November 2020. Entrepreneurs and investors are flocking to startups such as MAPS Public Benefit Corporation and Compass Pathways, which are currently developing psychedelic drugs for therapeutic application. Research centers have been cropping up across the country as well, even at prestigious institutions like Johns Hopkins School of Medicine and Massachusetts General Hospital. 

So how do psychedelics work? In truth, scientists still don’t know exactly what happens to neural circuitry under the influence of these mind-altering drugs. While more research is required to fully understand how psychedelics affect the brain, there are some findings that help clarify this mystery. For example, the major group of psychedelics—called the “classic psychedelics”—closely resembles the neurotransmitter serotonin in terms of molecular structure [8]. This group includes psilocin, one of the important components of magic mushrooms; 5-MeO-DMT, which is present in a variety of plant species and at least one toad species; and LSD, also known as acid [8]. What they all have in common is a tryptamine structure, characterized by the presence of one six-atom ring linked to a five-atom ring [8]. This similarity lends itself to a strong affinity between these psychedelics and serotonin receptors in the cerebral cortex, particularly the receptor 5-HT2A [8]. The implication of this is that psychedelics can have a significant and widespread influence on brain chemistry, given that serotonin is one of the main neurotransmitters in the brain and plays a major role in mood regulation [9].

What follows is a poorly understood cascade of effects that causes disorganized activity across the brain [10]. At the same time, it seems that the brain’s default-mode network gets inhibited. British researcher Robin Carhart-Harris recently discovered this by dosing study participants with either psilocybin or LSD and examining their neural activity with the help of fMRI (functional magnetic resonance imaging). Rather than seeing what most people expected—an excitation of brain networks—Dr. Carhart-Harris found a decrease of neuronal firing in the brain, specifically in the default-mode network. According to Michael Pollan, author of the best-selling book on psychedelics How to Change Your Mind, this network is a “tightly linked set of structures connecting the prefrontal cortex to the posterior cingulate cortex to deeper, older centers of emotion and memory.” Its function appears to involve self-reflection, theory of mind, autobiographical memory, and other components that aid us in creating our identity. In other words, the ego—the conscious sense of self and thus the source of any self-destructive thoughts that may arise—seems to be localized in the default-mode network. This network is at the top of the hierarchy of brain function, meaning it regulates all other mental activity [10].

Therefore, when psychedelics enter the system and quiet the default-mode network, suddenly new and different neural pathways are free to connect, leading to a temporary rewiring of the brain [10]. In many cases, this disruption of normal brain functioning has reportedly resulted in mystical, spiritual, and highly meaningful experiences. Psychedelics facilitate neuroplasticity, thereby helping people break negative thinking patterns and showing them—even temporarily—that it’s possible to feel another way or view something from a different (and more positive) perspective. 

This kind of experience can be immensely helpful to someone who is struggling with a mental health disorder and needs a brain reset. While other techniques, such as meditation and general mindfulness, can help cultivate a similar feeling, they require much more time and effort, something that is not always feasible—and never easy—for those who are severely struggling with their mental health [10]. Psychedelics can help jump-start the process of healing, and their effects can be made even more powerful and long-lasting when coupled with psychotherapy [11]. Talking with a psychiatrist or psychologist after the drug treatment can help integrate and solidify a client’s newly acquired thinking patterns [11]. 

In a study published in The New England Journal of Medicine in April 2021, researchers found that psilocybin works at least as well as the leading antidepressant escitalopram [12]. In this double-blind, randomized, controlled trial, fifty-nine participants with moderate-to-severe depression took either psilocybin or escitalopram, along with a placebo pill in both cases. After six weeks, participants in both groups exhibited lower scores on the 16-item Quick Inventory of Depressive Symptomatology–Self-Report (QIDS-SR-16), indicating an improvement in their condition. The difference in scores between the two groups was not statistically significant, meaning that a longer study with a larger sample size is still required to show if there is an advantage to treating depression with psilocybin over conventional drugs [12]. However, one notable difference was that psilocybin seems to take effect faster than escitalopram [13]. As an SSRI (selective serotonin reuptake inhibitor), escitalopram takes a couple of months to work, something that’s not helpful for those with severe depression. Psilocybin, then, is suggested to provide more immediate relief to people battling depression [13]. 

In June 2020, a team of researchers at Johns Hopkins published a meta-analysis of nine clinical trials concerning psychedelic-assisted therapy for mental health conditions such as PTSD, end-of-life distress, depression, and social anxiety in adults with autism [14]. These were all the “randomized, placebo-controlled trials on psychedelic-assisted therapy published [in English] after 1993.” The psychedelics in question included LSD, psilocybin, ayahuasca, and MDMA. Following their statistical meta-analysis of these trials, they found that the “overall between-group effect size at the primary endpoint for psychedelic-assisted therapy compared to placebo was very large (Hedges g = 1.21). This effect size reflects an 80% probability that a randomly selected patient undergoing psychedelic-assisted therapy will have a better outcome than a randomly selected patient receiving a placebo” [14]. 

There were only minimal adverse effects reported from this kind of therapy and no documentation of serious adverse effects [14]. When compared to effect sizes of pharmacological agents and psychotherapy interventions, the effects of psychedelic-assisted therapy were larger, especially considering the fact that participants received the psychedelic substance one to three times prior to the primary endpoint, as opposed to daily or close-to-daily interventions with psychotherapy or conventional medications. Overall, results suggest that psychedelic-assisted therapy is effective—with minimal adverse effects—and presents a “promising new direction in mental health treatment” [14].

At UC Davis, researchers in the Olson Lab recently engineered a drug modeled after the psychedelic ibogaine [15]. This variant, called tabernanthalog (TBG), was designed to induce the therapeutic effects of ibogaine minus the toxicity or risk of cardiac arrhythmias that make consuming ibogaine less safe. TBG is a non-hallucinogenic, water-soluble compound that can be produced in merely one step. In an experiment performed with rodents, “tabernanthalog was found to promote structural neural plasticity, reduce alcohol- and heroin-seeking behavior, and produce antidepressant-like effects.” These effects should be long lasting given that TBG has the ability to modify the neural circuitry related to addiction, making it a much better alternative to existing anti-addiction medications. And since the brain circuits involved in addiction overlap with those of conditions like depression, anxiety, and post-traumatic stress disorder, TBG could help treat various mental health issues [15]. 

As the psychedelic industry begins to emerge, members of the psychedelic community are voicing their concerns about the risks that come with rapid commercialization [7]. Biotech companies, researchers, and therapists should be careful about marketing psychedelics as a casual, quick fix to people’s problems. Psychedelics can occasion intense and profound experiences and should be consumed with the right mindset, setting, and guidance. There are still many unknowns about psychedelic use, especially its long-term effects. Not all individuals should try treatment with psychedelics, especially those with a personal or family history of psychosis. It will also be important to move forward in a way that is respectful to Indigenous traditions and accessible to all people—particularly people of color—without letting profit become the main priority. Some advocates worry that commercialization and adoption into a pharmaceutical model might strip psychedelics of their most powerful transformational benefits and that they will wind up being used merely for symptom resolution [7]. As long as psychedelics’ reintroduction to mainstream medicine is handled mindfully, the world may soon have a new avenue for effective mental health therapy that honors its Indigenous heritage and is accessible to all. 

 

References:

  1. Alcohol & Drug Foundation. Psychedelics. October 7, 2020. Available from https://adf.org.au/drug-facts/psychedelics/
  2. Williams L. 1999. Human Psychedelic Research: A Historical And Sociological Analysis. Cambridge University: Multidisciplinary Association for Psychedelic Studies. 
  3. Sessa B. 2006. From Sacred Plants to Psychotherapy:The History and Re-Emergence of Psychedelics in Medicine. Royal College of Psychiatrists.
  4. History. MK-Ultra. June 16, 2017. Available from https://www.history.com/topics/us-government/history-of-mk-ultra
  5. Beres D. Psychedelic Spotlight. Why Are Psychedelics Illegal? October 13, 2020. Available from https://psychedelicspotlight.com/why-are-psychedelics-illegal/
  6. United States Drug Enforcement Administration. Drug Scheduling. Available from https://www.dea.gov/drug-information/drug-scheduling.
  7. Gregoire C. NEO.LIFE. Inside the Movement to Decolonize Psychedelic Pharma. October 29, 2020. Available from https://neo.life/2020/10/inside-the-movement-to-decolonize-psychedelic-pharma/
  8. Pollan M. How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. New York: Penguin Press; 2018.
  9. Bancos I. Hormone Health Network. What is Serotonin? December 2018. Available from https://www.hormone.org/your-health-and-hormones/glands-and-hormones-a-to-z/hormones/serotonin#:~:text=Serotonin%20is%20the%20key%20hormone, sleeping%2C%20eating%2C%20and%20digestion
  10. Pollan M, Harris S, Silva J, Goertzel B. December 11, 2020. Psychedelics: The scientific renaissance of mind-altering drugs. YouTube: Big Think. 1 online video: 20 min, sound, color. 
  11. Singer M. 2021. Trip Adviser.Vogue. March issue: 198-199, 222-224. 
  12. Carhart-Harris R, Giribaldi B, Watts R, Baker-Jones M, Murphy-Beiner A, Murphy R, Martell J, Blemings A, Erritzoe D, Nutt DJ. 2021. Trial of Psilocybin versus Escitalopram for Depression. N Engl J Med [Internet]. 384:1402-1411. doi: 10.1056/NEJMoa2032994.
  13. Lee YJ. Business Insider Australia. A landmark study shows the main compound in magic mushrooms could rival a leading depression drug. April 14, 2021. Available from https://www.businessinsider.com.au/psilocybin-magic-mushroom-for-depression-takeaways-from-icl-report-nejm-2021-4
  14. Luoma JB, Chwyl C, Bathje GJ, Davis AK, Lacelotta R. 2020. A Meta-Analysis of Placebo-Controlled Trials of Psychedelic-Assisted Therapy. Journal of Psychoactive Drugs [Internet]. 52(4):289-299. doi: 10.1080/02791072.2020.1769878.
  15. Cameron LP, Tombari RJ, Olson DE, et al. 2020. A non-hallucinogenic psychedelic analogue with therapeutic potential. Nature [Internet]. 589:474–479. https://doi.org/10.1038/s41586-020-3008-z.

Oral Microbiome Imbalances Could Provide Early Warning of Disease

Image caption: Fragments of amyloid precursor protein aggregate in β-amyloid plaques, seen here in dark brown. These plaques have been found in the brains of patients with Alzheimer’s disease. Credit: Wikimedia Commons.

 

By Daniel Erenstein, Neurobiology, Physiology & Behavior ‘21

Author’s Note: I first learned about research on the oral microbiome while covering this year’s annual meeting of the American Association for the Advancement of Science in February. Under the theme of “Understanding Dynamic Ecosystems,” the conference, which was held virtually, welcomed scientists, journalists, students, and science enthusiasts for four days of sessions, workshops, and other talks. The human microbiome, home to trillions of bacteria and other microbes, is as dynamic an ecosystem as they come. This article focuses on the bacteria that live in our mouths and their fascinating role in diseases such as diabetes and chronic kidney disease. With this article, I hope that readers consider further reading on the diverse, lively ecosystems within our bodies. For more stories on the microbiome, The Aggie Transcript is a great place to start.

 

There is more to the oral microbiome than meets the mouth. Established within a few minutes of birth, this diverse community of bacteria, fungi, and other microbes lives on every surface of our mouths throughout our lives [1]. For decades, scientists have researched these microorganisms and their role in dental diseases.

But far less is known about the interactions of these bacteria and their products with other parts of the body, and these interactions could hold a particularly important role in human health.

“We’ve always thought of the mouth as somehow in isolation, that oral health does not somehow impact the rest of the body,” said Purnima Kumar, DDS, MS, PhD, professor of periodontology at Ohio State University, during a session at the American Association for the Advancement of Science annual meeting that took place on February 8 [2].

Scientists, though, are increasingly looking to the oral microbiome for answers to questions about health and disease [3].

“The time is absolutely right for us to start putting the mouth back into the body,” Kumar said during the session “Killer Smile: The Link Between the Oral Microbiome and Systemic Disease.”

Panelists highlighted three systemic diseases — diabetes mellitus, rheumatoid arthritis, and Alzheimer’s disease — and their unexpected connection to disturbances in the oral microbiome [4-6]. A common thread running through all three diseases is an association with periodontitis, a gum disease triggered by the accumulation of bacteria, viruses, and even fungi in dental biofilm, or plaque, on the surface of teeth [7].

In health, there is peace between oral microbes and our immune system. Healthy and frequent communication — molecular diplomacy between bacteria and immune surveillance — maintains stable relations. But microbial imbalances due to bacterial buildup in plaque can cause inflammatory immune reactions, resulting in the gradual breakdown of the barrier between biofilm and gum tissue.

“When you have gum disease, that crosstalk, that communication, that harmony is broken down,” Kumar said.

When bacteria subsequently invade our gum tissues, there are consequences for human disease [8].

Mark Ryder, DMD, professor of periodontology at University of California, San Francisco, studies the role of one such bacterium, Porphyromonas gingivalis, in disease [9]. This bacterium secretes enzymes called gingipains, which are essential for its survival.

In the case of Alzheimer’s disease, these gingipains can travel through the bloodstream, cross the blood-brain barrier, and accumulate in brain regions like the hippocampus, which is involved in memory. There, they help break down an embedded membrane protein called amyloid precursor protein into fragments, which group together in deposits found in people with Alzheimer’s disease.

Further study of gingipains and other microbial products could provide insight into a “critical early event in the initiation and progression of Alzheimer’s disease,” Ryder said.

Similarly, rheumatoid arthritis can be triggered by immune responses to other by-products of P. gingivalis, including protein antibodies that cause joint inflammation, according to Iain Chapple, BDS, PhD, professor of periodontology at the University of Birmingham [5].

Past research on links between the oral microbiome and systemic disease has even shown that these effects can travel a two-way street.

This is apparent in the research of Dana Graves, DDS, DMSc, professor of periodontics at University of Pennsylvania, whose work has examined the effects of diabetes on our microbiome, and vice versa [4].

“Diabetes impacts the mouth in a very profound way,” said Graves, adding that the inflammatory responses to bacteria caused by diabetes lead to disruption of the microbiome.

“This bidirectionality is something we saw first with diabetes, we’ve seen it now with rheumatoid arthritis, and it appears now that we’re starting to see it with chronic kidney disease,” Chapple said. “We need to start really digging down into [biological mechanisms] to understand more about that relationship.”

However, the verdict is still out on how much bacterial products such as gingipains contribute to disease. For Ryder and others, the existing data is insufficient — fully answering that question depends on carefully constructed clinical trials.

“When we’re trying to establish a link between something like the microbiome and the mouth and Alzheimer’s, association studies are important, the actual underlying biological mechanisms are important, but finally what sort of seals the deal, of course, is the actual effects of intervention,” Ryder said.

An ongoing clinical trial with more than 600 patients is evaluating the success of gingipain inhibitors in preventing symptoms of Alzheimer’s disease [10]. The results, expected by the end of this year, could carry implications not just for the treatment of Alzheimer’s disease but any disease with underlying roots in the oral microbiome.

Regardless of the results, it’s clear that breaking out the toothbrush and floss every day is crucial to our overall well-being.

 

References:

  1. Ursell LK, Metcalf JL, Parfrey LW, Knight R. 2012. Defining the human microbiome. Nutr Rev. 70 Suppl 1: S38-S44. https://doi.org/10.1111/j.1753-4887.2012.00493.x.
  2. Kumar P, D’Souza R, Shaddox L, Burne RA, Ebersole J, Graves D, Ryder MI, Chapple I. 2021. Killer Smile: The Link Between the Oral Microbiome and Systemic Disease [Conference presentation]. AAAS Annual Meeting [held virtually]. https://aaas.confex.com/aaas/2021/meetingapp.cgi/Session/27521.
  3. Deo PN, Deshmukh R. 2019. Oral microbiome: Unveiling the fundamentals. J Oral Maxillofac Pathol. 23(1): 122-128. doi:10.4103/jomfp.JOMFP_304_18.
  4. Graves DT, Ding Z, Yang Y. 2020. The impact of diabetes on periodontal diseases. Periodontol 2000. 82(1): 214-224. https://doi.org/10.1111/prd.12318.
  5. Lopez-Oliva I, Paropkari AD, Saraswat S, Serban S, Yonel Z, Sharma P, de Pablo P, Raza K, Filer A, Chapple I, et al. 2018. Dysbiotic subgingival microbial communities in periodontally healthy patients with rheumatoid arthritis. Arthritis Rheumatol. 70(7): 1008-1013. https://doi.org/10.1002/art.40485.
  6. Dioguardi M, Crincoli V, Laino L, Alovisi M, Sovereto D, Mastrangelo F, Lo Russo L, Lo Muzio L. 2020. The Role of Periodontitis and Periodontal Bacteria in the Onset and Progression of Alzheimer’s Disease: A Systematic Review. J Clin Med. 9(2): 495. https://doi.org/10.3390/jcm9020495.
  7. Arigbede AO, Babatope BO, Bamidele MK. 2012. Periodontitis and systemic diseases: A literature review. J Indian Soc Periodontol. 16(4): 487-491. https://doi.org/10.4103/0972-124X.106878.
  8. Curtis MA, Diaz PI, Van Dyke TE. 2020. The role of the microbiota in periodontal disease. Periodontol 2000. 83(1): 14-25. https://doi.org/10.1111/prd.12296.
  9. Ryder MI. 2020. Porphyromonas gingivalis and Alzheimer disease: Recent findings and potential therapies. J Periodontol. 91 Suppl 1: S45-S49. https://doi.org/10.1002/JPER.20-0104.
  10. Cortexyme Inc. 2021. GAIN Trial: Phase 2/3 Study of COR388 in Subjects With Alzheimer’s Disease. ClinicalTrials.Gov. Identifier NCT03823404. https://clinicaltrials.gov/ct2/show/NCT03823404.

The Human-Animal Interface: Exploring the Origin, Present, and Future of COVID-19

By Tammie Tam, Microbiology ‘22

Author’s Note: Since taking the class One Health Fundamentals (PMI 129Y), I have been acutely aware of this One Health idea that the health of humankind is deeply intertwined with the health of animals and our planet. This COVID-19 pandemic has been a perfect model as a One Health issue. Through this article, I hope to introduce readers to a fuller perspective of COVID-19 as a zoonotic disease. 

 

The COVID-19 pandemic has escalated into a human tragedy, measured daily by an increasing number of infection cases and a mounting death toll. Yet, to understand the current and future risks of the SARS-CoV-2 virus, one must account for the virus’s relationship with animals in the context of its zoonotic nature, as transmission between animals and humans is often overlooked. Uncovering the range of intermediary hosts of the virus may provide clues to the virus’s origin, point to potential reservoirs for a mutating virus, and help inform future public health policies. As a result, a small but growing body of researchers is working to predict and confirm potential human-animal transmission models.

The origin of the SARS-CoV-2

Currently, the World Health Organization (WHO) and other disease detectives are still working to unravel the complete origin of the virus. Scientists have narrowed down the primary animal reservoir for the virus through viral genomic analysis comparing strains of human and animal coronaviruses [1]. They suspect bats to be the most likely primary source of the virus because the SARS-CoV-2 strain is a 96.2 percent match for a bat coronavirus, bat-nCoV RaTG13 [1]. Despite the close match, the differences in key surface proteins between the two viruses are distinct enough to suggest that the bat coronavirus had to have undergone mutations through one or more intermediary hosts in order to infect humans [2]. 

To identify potential intermediate hosts, scientists are examining coronaviruses specific to different animal species [1]. If SARS-CoV-2 is genetically similar to another animal-specific coronavirus, SARS-CoV-2 may also possess similar viral proteins to the animal-specific coronaviruses. With similar proteins, similar host-virus interactions can theoretically take place, allowing for SARS-CoV-2 to infect the animal in question. For example, besides bats, a pangolin coronavirus, pangolin-nCoV, has the second highest genetic similarity to SARS-CoV-2, which positions the pangolin as a possible intermediate host [3]. Because of the similarity, viral proteins of the pangolin coronavirus can interact with shared key host proteins in humans just as strongly as in pangolin [4]. However, more epidemiological research is needed to determine whether a pangolin had contracted coronavirus from a human or a human had contracted coronavirus from a pangolin. Alternatively, the intermediate host could have been another animal, but there are still no clear leads [1]. 

What it takes to be a host for SARS-CoV-2

In any viable host, the SARS-CoV-2 virus operates by sneaking past immune defenses, forcing its way into cells, and co-opting the cell’s machinery for viral replication [5]. Along the way, the virus may acquire mutations—some deadly and some harmless. Eventually, the virus has propagated in a high enough quantity to jump from its current host to the next [5].

Most importantly for the virus to infect a host properly, the virus must recognize the entranceway into cells quickly enough before the host immune system catches on to the intruder and mounts an attack [5]. SARS-CoV-2’s key into the cell is through its spike glycoproteins found on the outer envelope of the virus. Once the spike glycoproteins interact with an appropriate angiotensin-converting enzyme 2 (ACE2) receptor found on the host cell surfaces, the virus blocks the regular functions of the ACE2 receptor, such as regulating blood pressure and local inflammation [6,7]. At the same time, the interaction triggers the cell to take in the virus [5]. 

Since the gene encoding the ACE2 receptor is relatively similar among humans, the virus can travel and infect the human population easily. Likewise, most animals closely related to humans, like great apes, possess a similar ACE2 receptor in terms of structure and function, which gives SARS-CoV-2 a path to hijack the cells of certain non-human animals [8]. Despite the overall similar structure and function, the ACE2 receptor varies between animal species at key interaction sites with the spike glycoproteins, due to natural mutations retained because they make the ACE2 receptor most efficient in the respective animal. Thus, while there are other proteins involved in viral entry into the host cells, the ACE2 receptor is the one that varies between animals and most likely modulates susceptibility to COVID-19 [9]. 

As a result, scientists are particularly interested in the binding of the ACE2 receptor to the viral spike glycoprotein because of its implications for an organism’s susceptibility to COVID-19. Dr. Xuesen Zhao and their team at Capital Medical University examined the sequence identities and interaction patterns at the binding interface between the ACE2 receptors of different animals and the SARS-CoV-2 spike glycoprotein [10]. They reasoned that the more similar an animal’s ACE2 receptor is to the human receptor, the more likely the virus could infect that animal. For example, they found that the ACE2 receptors of rhesus monkeys, a closely related primate, had interaction patterns similar to those of humans [10]. Meanwhile, they found that rats and mice have dissimilar ACE2 receptors and support poor viral entry [10].

While entrance into the cell is a major part of infection, there are other factors to consider, such as whether viral replication can subsequently take place [11]. With so many different organisms on the planet, the models simply provide a direction for where to look next, and SARS-CoV-2 is unable to replicate efficiently in certain animals despite having the key to get in. For example, the virus replicates well in ferrets and cats, making them susceptible; in dogs it replicates only weakly; and in pigs, chickens, and ducks it cannot replicate at all [12]. Outside of the laboratory, confirmed cases in animals include farm animals such as minks; zoo animals such as gorillas, tigers, lions, and snow leopards; and domestic animals such as cats and dogs [13].

The future for SARS-CoV-2

Due to the multitude of potential intermediary hosts, COVID-19 is unlikely to disappear for good even if every person is vaccinated [14]. Viral spillover from humans to animals can spill back to humans. As the virus travels through a new animal population, it is subjected to slightly different pressures and selected for mutations that confer an advantage for survival and transmission within the current host population [15]. Sometimes this makes the virus weaker in humans; at other times, the virus becomes more virulent and dangerous to humans when it spills back over from the animal reservoir [15]. Consequently, it is important to understand the full range of hosts in order to put preventative measures against viral spillover in place. 

As of now, most known susceptible animals do not usually become severely ill, with some exceptions such as minks [1]. Nevertheless, people must take precautions when interacting with animals, since research in this area is still developing and there are many unknown factors involved. Protecting endangered species from infection is especially important because they already face other threats that make them vulnerable to extinction [8]. As a result, some researchers are taking matters into their own hands to keep certain animals safe. For example, after the San Diego Zoo’s resident gorillas contracted COVID-19 in January 2021, the zoo proactively began using the experimental Zoetis vaccine to vaccinate its orangutans and bonobos, great apes considered closely related to humans and susceptible to COVID-19 [16]. Due to the gorillas’ presumed immunity from their recent infection and a limited supply of the Zoetis vaccine, the zoo decided not to vaccinate the gorillas [16]. Now, scientists are trying to modify the Zoetis vaccine for minks, because minks are highly susceptible to severe COVID-19 and have been shown to transmit the virus back to humans [17]. 

Besides the virus evolving into different variants through ordinary genetic mutation, people must be cautious of potential new coronaviruses that can infect humans [18]. The human population has encountered other novel coronaviruses over the past several years, so this is not out of the question. If a human coronavirus and an animal coronavirus infect the same animal host, a recombination event could create a new hybrid coronavirus [19]. 

For the SARS-CoV-2 virus, Dr. Maya Wardeh and their team at the University of Liverpool found over 100 possible host species in which recombination events could take place [18]. These hosts are animals that can contract two or more coronaviruses, one of them being SARS-CoV-2. For instance, the lesser Asiatic yellow bat, a well-known host of several coronaviruses, is predicted to be one of these recombination hosts [18]. A species closer to home, the domestic cat, is another possible recombination host [18]. While recombination would require a series of rare events, from co-infection to human contact with the particular animal, scientists are on the lookout.

Even without a full picture, the Centers for Disease Control and Prevention (CDC) understands the potential risks of animal reservoirs and advises COVID-19-infected patients to stay away from animals, whether wildlife or domestic, to prevent spillover [20]. COVID-19 has also brought to light zoonotic disease risks from illegal animal trades and wet markets. Once research into the human-animal transmission model becomes more well-developed, public health officials will have a clearer picture of how the pandemic spiraled to its current state and can develop policies to prevent it from happening again. 

 

References: 

  1. Zhao J, Cui W, Tian BP. 2020. The Potential Intermediate Hosts for SARS-CoV-2. Frontiers in Microbiology 11 (September): 580137. https://doi.org/10.3389/fmicb.2020.580137
  2. Friend T, Stebbing J. 2021. What Is the Intermediate Host Species of SARS-CoV-2? Future Virology 16 (3): 153–56. https://doi.org/10.2217/fvl-2020-0390.
  3. Lam TT, Jia N, Zhang YW, Shum MH, Jiang JF, Zhu HC, Tong YG, et al. 2020. Identifying SARS-CoV-2-Related Coronaviruses in Malayan Pangolins. Nature 583 (7815): 282–85. https://doi.org/10.1038/s41586-020-2169-0
  4. Wrobel AG, Benton DJ, Xu P, Calder LJ, Borg A, Roustan C, Martin SR, Rosenthal PB, Skehel JJ, Gamblin SJ. 2021. Structure and Binding Properties of Pangolin-CoV Spike Glycoprotein Inform the Evolution of SARS-CoV-2. Nature Communications 12 (1): 837. https://doi.org/10.1038/s41467-021-21006-9
  5. Harrison AG, Lin T, Wang P. 2020. Mechanisms of SARS-CoV-2 Transmission and Pathogenesis. Trends in Immunology 41 (12): 1100–1115. https://doi.org/10.1016/j.it.2020.10.004
  6. Hamming I, Cooper ME, Haagmans BL, Hooper NM, Korstanje R, Osterhaus ADME, Timens W, Turner AJ, Navis G, van Goor H. 2007. The Emerging Role of ACE2 in Physiology and Disease. The Journal of Pathology 212 (1): 1–11. https://doi.org/10.1002/path.2162.
  7. Sriram K, Insel PA. 2020. A Hypothesis for Pathobiology and Treatment of COVID‐19: The Centrality of ACE1/ACE2 Imbalance. British Journal of Pharmacology 177 (21): 4825–44. https://doi.org/10.1111/bph.15082
  8. Melin AD, Janiak MC, Marrone F, Arora PS, Higham JP. 2020. Comparative ACE2 Variation and Primate COVID-19 Risk. Communications Biology 3 (1): 641. https://doi.org/10.1038/s42003-020-01370-w
  9. Brooke GN, Prischi F. 2020. Structural and Functional Modelling of SARS-CoV-2 Entry in Animal Models. Scientific Reports 10 (1): 15917. https://doi.org/10.1038/s41598-020-72528-z.
  10. Zhao X, Chen D, Szabla R, Zheng M, Li G, Du P, Zheng S, et al. 2020. Broad and Differential Animal Angiotensin-Converting Enzyme 2 Receptor Usage by SARS-CoV-2. Journal of Virology 94 (18). https://doi.org/10.1128/JVI.00940-20.
  11. Manjarrez-Zavala MA, Rosete-Olvera DP, Gutiérrez-González LH, Ocadiz-Delgado R, Cabello-Gutiérrez C. 2013. Pathogenesis of Viral Respiratory Infection. IntechOpen. https://doi.org/10.5772/54287
  12. Shi J, Wen Z, Zhong G, Yang H, Wang C, Huang B, Liu R, et al. 2020. Susceptibility of Ferrets, Cats, Dogs, and Other Domesticated Animals to SARS–Coronavirus 2. Science 368 (6494): 1016–20. https://doi.org/10.1126/science.abb7015.
  13. Quammen D. 2021. And Then the Gorillas Started Coughing. The New York Times. Accessed February 19, 2021. Available from: https://www.nytimes.com/2021/02/19/opinion/covid-symptoms-gorillas.html
  14. Phillips N. 2021. The Coronavirus Is Here to Stay — Here’s What That Means. Nature 590 (7846): 382–84. https://doi.org/10.1038/d41586-021-00396-2
  15. Geoghegan JL, Holmes EC. 2018. The Phylogenomics of Evolving Virus Virulence. Nature Reviews Genetics 19 (12): 756–69. https://doi.org/10.1038/s41576-018-0055-5
  16. Chan S, Andrew S. 2021. Great Apes at the San Diego Zoo Receive a Covid-19 Vaccine for Animals. CNN. Accessed March 5, 2021. Available from: https://www.cnn.com/2021/03/05/us/great-apes-coronavirus-vaccine-san-diego-zoo-trnd/index.html
  17. Greenfield P. 2021. Covid Vaccine Used on Apes at San Diego Zoo Trialled on Mink. The Guardian. Accessed March 23, 2021. Available from: http://www.theguardian.com/environment/2021/mar/23/covid-vaccine-used-great-apes-san-diego-zoo-trialled-mink
  18. Wardeh M, Baylis M, Blagrove MSC. 2021. Predicting Mammalian Hosts in Which Novel Coronaviruses Can Be Generated. Nature Communications 12 (1): 780. https://doi.org/10.1038/s41467-021-21034-5.
  19. Pérez-Losada M, Arenas M, Galán JC, Palero F, González-Candelas F. 2015. Recombination in Viruses: Mechanisms, Methods of Study, and Evolutionary Consequences. Infection, Genetics and Evolution 30 (March): 296–307. https://doi.org/10.1016/j.meegid.2014.12.022.
  20. Centers for Disease Control and Prevention. 2020. COVID-19 and Your Health. Accessed February 11, 2020. Available from: https://www.cdc.gov/coronavirus/2019-ncov/daily-life-coping/animals.html.

Human Cryopreservation: An Opportunity for Rejuvenation

By Barry Nguyen, Biochemistry & Molecular Biology ‘23

Author’s Note: I became interested in ways to bypass built-in lifespans after taking HDE 117, a longevity class with Dr. James Carey. During the course of the class, I was exposed to many different ways to extend the human lifespan. However, I was most interested in cryogenics and its prospects of human rejuvenation, prompting me to explore the possibilities of human cryopreservation. 

 

Summary

This paper explores the prospects of human cryopreservation. The first section discusses cryogenics and its relevance to human cryopreservation. The following section uses empirical modeling to illustrate the relationship between temperature and reaction rates. Next, the paper discusses the cryopreservation procedure itself and explores how the definition of death can be reimagined. We then transition to cryopreservation’s possibilities for rejuvenation: we reconsider how aging is defined, discuss aging phenomena on the molecular scale, and use both as a basis for a discussion of immortality. The succeeding section is concerned with the limitations of human cryopreservation. Finally, the paper concludes with a brief discussion of the possible future of cryogenic technology. 

Cryogenics

Cryogenics is a field of study focused on how materials behave at very low temperatures, ranging from -150°C down to -273°C. At these extremely low temperatures, chemical properties are altered, unique phenomena emerge, and molecular interactions are halted [1]. “Halted” does not mean that all molecular motion has stopped; rather, molecular interactions come as close to ceasing as theoretically possible and sit at the lowest possible energy state. Because heat is related to the motion of particles, the biochemical activities within living systems are effectively suppressed at these temperatures [9], enabling extensive applications, most notably human cryopreservation. The prospects for preserving an individual at extremely cold temperatures have improved over the years as research within the field continues to develop, but as of now, human cryopreservation remains more speculation than reality. Freezing an individual is one thing; there is no guarantee that the individual will wake up from such an extensive period of suspension. Although extremely low temperatures serve as an appropriate basis for human cryopreservation, many more factors must be considered to avoid complications during the procedure and after revival.

Empirical Modeling

The rates of biochemical processes at extremely low temperatures can be modeled mathematically [4]. The Arrhenius equation, proposed by Svante Arrhenius in 1889, establishes a relationship between temperature and reaction rate. In Figure 1, K is the reaction rate, Ea is the activation energy, A is the frequency factor (related to the molecular orientations necessary to produce a favorable reaction), R is the universal gas constant, and T is the absolute temperature. Manipulating the equation produces a form that directly compares reaction rates at two temperatures, as depicted in Figure 2. We will use the enzyme lactate dehydrogenase to illustrate the relationship between K and T [4]. With its activation energy of 54,810 J/mol, we can compare the enzyme’s reaction rate across a 10°C difference. With T1 and T2 at 40°C and 30°C respectively, we get a reaction rate ratio of 2.004. In other words, cooling this enzyme by just 10°C roughly halves its reaction rate.
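For readers viewing the text without the figures, the two expressions can be reconstructed as follows. This is a sketch of the calculation described above, assuming the standard gas constant R = 8.314 J mol⁻¹ K⁻¹ and absolute temperatures of 313 K and 303 K for 40°C and 30°C:

\[ K = A\,e^{-E_a/(RT)} \qquad \Longrightarrow \qquad \frac{K_1}{K_2} = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)\right] \]

\[ \frac{K_1}{K_2} = \exp\!\left[\frac{54{,}810}{8.314}\left(\frac{1}{303} - \frac{1}{313}\right)\right] \approx 2.0 \]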

The relationship between reaction rate and temperature expressed by the Arrhenius equation lends weight to the viability of cryopreservation. If a 10°C drop is enough to cut a reaction rate in half, imagine how far reaction rates would fall in cryonically preserved individuals held at extremely low temperatures. The biochemical processes occurring in the body are effectively paused: not physically stopped, but slowed so much that the time they would need to go to completion becomes, for practical purposes, infinite. 
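Purely as an illustration of this exponential dependence (the Arrhenius form is not strictly valid for vitrified tissue, so the figure below should be read as an order-of-magnitude intuition rather than a measurement), plugging a liquid-nitrogen storage temperature of 77 K and a body temperature of 310 K into the same ratio for lactate dehydrogenase gives:

\[ \frac{K_{310\,\mathrm{K}}}{K_{77\,\mathrm{K}}} = \exp\!\left[\frac{54{,}810}{8.314}\left(\frac{1}{77} - \frac{1}{310}\right)\right] \approx e^{64} \approx 10^{28} \]

On this rough estimate, a reaction that takes one second at body temperature would take on the order of 10^20 years at liquid-nitrogen temperature, which is the sense in which such processes are “paused.”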

Figure 1. The Arrhenius equation.
Figure 2. Manipulation of the Arrhenius equation to compare reaction rates at two different temperatures.

 

Human Cryopreservation

Once it was understood that biochemical reaction rates are suppressed at these extremely low temperatures, the practice of cryogenically preserving a whole individual became a reality [14]. For the process to begin, the individual must first enter the death state. Once an individual enters the initial stages of death, the body begins to decompose: cell membranes break down and release digestive enzymes that digest the body’s tissues [11]. Because the body breaks down at such a rapid pace, it is imperative that work on the patient begin immediately after death.

The process of chilling the human body to extremely low temperatures is delicate and slow, and it is a critical early step of the cryopreservation procedure. Once the patient enters the death state, the circulation and respiration of the cryonic subject are restored and the subject is ready to be cooled [4]. First, the subject’s blood is replaced with a solution containing 10% cryoprotectants to prevent ice formation. Only a small percentage of cryoprotectant is added initially to avoid an excessive osmotic shrinking response. Once the intracellular and extracellular cryoprotectant concentrations reach equilibrium, the cells are ready for cooling, which is done at a very slow pace (1°C/min) [5].

The cryoprotectant solution typically contains nutritional salts, buffers, osmogens, and apoptosis inhibitors, ingredients necessary to maintain isotonic conditions within cells [5]. This helps cells throughout the body avoid swelling and shrinking. Another key component of the cryoprotectant mixture is the class of non-penetrating cryoprotectants, typically large polymers, which play a large part in inhibiting ice growth and preventing injury from exposure to extreme cold [5]. 

To understand the prospects of human cryopreservation, it is helpful to return to the definition of death. In 1988, the scientific community reviewed and redefined death, shifting the criterion from cardio-respiratory arrest to brain death [8]. In cryonically preserved patients, the extremely cold temperatures are thought to preserve the neural structures that store a person’s long-term memory and identity. In this light, using extremely low temperatures to keep neural structures from being compromised is a prospect worth noting. Individuals who are cryonically preserved should be viewed not as dead or alive, but as temporarily suspended in time [8]. The normal cycles of biological processes such as growth and decay are paused, providing an opportunity for resuscitation and reanimation in the future [10]. For perspective, cryopreservation can be compared to frozen embryos: just as embryos preserved at extremely cold temperatures gain life once implanted in a uterus, a cryopreserved patient may reenter the living state through reanimation. 

Prospects for Immortality

The process of human cryopreservation aims to allow individuals to escape imminent death by first being induced into a transient death state [8]. Essentially, individuals are given the opportunity to bypass human mortality. Dr. James Hiram Bedford, a former psychology professor at UC Berkeley, had his life threatened by renal cancer. He decided to undergo the cryopreservation process and, in 1967, became the first human to be cryonically preserved [13]. By agreeing to the procedure, he hoped that future technology would be advanced enough to revive him and cure his illness. Since then, interest in cryopreservation has increased substantially, and as of 2014, about 250 corpses had been cryogenically preserved in the US [13].

Shifting Views on Aging

Aging is a degradative process that entails a whole array of pathologies. If we view aging itself as a disease that can be treated, cryopreservation opens a wide range of possibilities. Specifically, cryopreservation lets an individual sidestep the pathologies of aging by waiting to be treated once technology has advanced enough. This offers hope of bypassing the mechanically built-in lifespans of humans and, essentially, provides prospects for immortality.

On a larger scale, the probability of dying increases significantly as we age [7]. Put simply, more health factors compete for our lives as we grow older, and the chance of surviving to older ages decreases; in this sense, aging correlates with functional decline. On the molecular scale, aging can be seen as a direct consequence of telomere shortening [6]. Telomeres are nucleoprotein structures at the ends of chromosomes that are essential to the integrity of our DNA. During DNA replication, telomeres protect chromosome ends and prevent loss of genetic information [16]. However, as we age and our cells continue to undergo DNA replication, telomeres shorten, leading to end-to-end joining of chromosomes, pathological cell division, genomic instability, and apoptosis. 

In short, the health consequences that come with aging are inevitable, but human cryopreservation may offset them. The process allows an individual suffering from a presently incurable disease to be temporarily frozen in time and revived when society is advanced enough to treat the disease successfully. In essence, human cryopreservation can be seen as a way to bypass otherwise inevitable health consequences, offering rejuvenating possibilities for any individual.

Technological Limitations 

Although successfully preserving an individual at extreme temperatures is certainly an exciting prospect, little evidence exists to indicate that successful preservation and reanimation are possible [15]. At present, many challenges must be overcome even to support the viability of such an extensive process. According to Professor Armitage, the director of tissue banking at the University of Bristol, preserving the whole human body is an entirely new challenge [15]. Society is not even at the stage of cryopreserving organs; organs alone are very complex, containing different types of cells and blood vessels that all need to be preserved. Similarly, Barry Fuller, a professor at University College London, has stated that before exploring the prospects of human cryopreservation, society must first demonstrate that human organs can be cryopreserved for transplantation [15]. Hence, at present, there is close to zero evidence that a whole human body can survive cryopreservation. 

In the previous section, we discussed the Arrhenius equation, which describes the relationship between temperature and reaction rates. However, the equation does not address the consequences of rewarming the human body during reanimation. During thawing, frozen tissues and cells can suffer physical disruption that damages them [3]. Even an individual’s epigenetic markers can be affected, causing epigenetic reprogramming that changes the expression of certain genes. The biggest hurdle, however, is successful preservation of the brain. The human brain is arguably the most important organ in the body, and cryopreservation must preserve the integrity of its neural structures. The prospects of successfully cryopreserving whole human brains are slim because research is minimal; experiments with frozen whole animal brains have not been reported since the 1970s [3]. Research on this matter is severely limited.

Discussion 

Despite the overwhelming uncertainties surrounding human cryopreservation and society’s current limits, the prospect of defying death, or possibly avoiding it in the future, is a topic of increasing interest. For an individual brought to the brink of death, the uncertainties around the cryopreservation procedure, specifically its unproven track record of success, may seem inconsequential in the long run. If society were to dismiss the field of preservation purely because of unsubstantiated results and the unlikelihood of success, advancements would never occur. All in all, continuing technological advancement and research within cryogenics makes the prospect of reviving a frozen individual in the future ever more likely. 

 

References:

  1. Britannica, T. Editors of Encyclopaedia. “Cryogenics.” Encyclopedia Britannica, May 26, 2017. https://www.britannica.com/science/cryogenics.
  2. “What Is Cryogenics?” Gaslab.com. Accessed May 2, 2021. https://gaslab.com/blogs/articles/what-is-cryogenics
  3. Stolzing, Alexandra. “Will We Ever Be Able to Bring Cryogenically Frozen Corpses Back to Life? A Cryobiologist Explains.” The Conversation, March 26, 2019. https://theconversation.com/will-we-ever-be-able-to-bring-cryogenically-frozen-corpses-back-to-life-a-cryobiologist-explains-69500.
  4. Best, Benjamin P. “Scientific Justification of Cryonics Practice.” Rejuvenation Research 11, no. 2 (2008): 493–503. https://doi.org/10.1089/rej.2008.0661. 
  5. Bhattacharya, Sankha. “Cryoprotectants and Their Usage in Cryopreservation Process.” Cryopreservation Biotechnology in Biomedical and Biological Sciences, 2018. https://doi.org/10.5772/intechopen.80477.
  6. Blasco, M. A. “Telomere length, Stem Cells and Aging.” Nature Chemical Biology, 3, no.10 (September 2007): 640–649. doi:10.1038/nchembio.2007.38
  7. Carey, J.R. 2020, June 13. Limits of morbidity compression. Longevity (HDE/ENT 117) lecture notes, UC Davis.
  8. Cohen, C. “Bioethicists Must Rethink the Concept of Death: the Idea of Brain Death Is Not Appropriate for Cryopreservation.” Clinics 67, no. 2 (2012): 93–94. https://doi.org/10.6061/clinics/2012(02)01. 
  9. Jang, Tae Hoon, Sung Choel Park, Ji Hyun Yang, Jung Yoon Kim, Jae Hong Seok, Ui Seo Park, Chang Won Choi, Sung Ryul Lee, and Jin Han. “Cryopreservation and Its Clinical Applications.” Integrative Medicine Research 6, no. 1 (2017): 12–18. https://doi.org/10.1016/j.imr.2016.12.001. 
  10. Lemke, Thomas. “Beyond Life and Death. Investigating Cryopreservation Practices in Contemporary Societies.” Soziologie 48, no. 4 (April 2019): 450–466.
  11. Lorraine. “The Stages of Human Decomposition.” Georgia Clean, April 6, 2020. https://www.georgiaclean.com/the-stages-of-human-decomposition/.
  12. Luke Davis. “The Difference between Cryonics and Cryogenics,” August 10, 2020. https://logicface.co.uk/difference-between-cryonics-and-cryogenics/.
  13. Moen, Ole Martin. “The Case for Cryonics.” Journal of Medical Ethics 41, no. 8 (2015): 677–81. https://doi.org/10.1136/medethics-2015-102715. 
  14. Purtill, Corinne. “Fifty Years Frozen: The World’s First Cryonically Preserved Human’s Disturbing Journey to Immortality.” Quartz. Accessed May 2, 2021. https://qz.com/883524/fifty-years-frozen-the-worlds-first-cryonically-preserved-humans-disturbing-journey-to-immortality/.
  15. Roxby, Philippa. “What Does Cryopreservation Do to Human Bodies?” BBC News. BBC, November 18, 2016. https://www.bbc.com/news/health-38019392.
  16. Trybek, Tomasz, Artur Kowalik, Stanisław Góźdź, and Aldona Kowalska. “Telomeres and Telomerase in Oncogenesis (Review).” Oncology Letters 20, no. 2 (2020): 1015–27. https://doi.org/10.3892/ol.2020.11659.