
Category Archives: Biology

Want to Get Involved In Research?

The BioInnovation Group is an undergraduate-run research organization aimed at increasing undergraduate access to research opportunities. We have many programs ranging from research project teams to skills training (BIG-RT) and Journal Club.

If you are an undergraduate interested in gaining research experience and skills training, check out our website (https://bigucd.com/) to see what programs and opportunities we have to offer. In order to stay up to date on our events and offerings, you can sign up for our newsletter. We look forward to having you join us!

Newest Posts

Limitations and Advancements of Diagnostics and Treatment Options for Ovarian Cancer

By Mari Hoffman, Genetics & Genomics ‘21

Author’s note: I wrote this literature review for an assignment in UWP 104E, Writing in Science. I chose this topic because my mom was recently diagnosed with ovarian cancer, and I wanted to use this opportunity to learn more about the literature surrounding the disease, specifically the latest research in diagnostics and treatments of ovarian cancer.

 

Introduction

Eighty percent of ovarian cancers are diagnosed at a late stage, either stage III or IV [1,2]. This is because the disease appears asymptomatic in many women, preventing early diagnosis [3,4]. Early diagnosis is also difficult to obtain due to the lack of biomarkers with high sensitivity and of effective diagnostic testing for early disease detection [2,4]. New diagnostic markers and techniques are being developed, such as circRNAs, which are single-stranded circular RNA transcripts [3], and risk algorithms such as ROMA and RMI [4]. The common late-stage diagnosis contributes to ovarian cancer having the highest mortality rate among gynecological cancers [2,4,5]. Ovarian cancer is the seventh most common cancer in women [2], and women in the United States have an average lifetime risk of 1.3% of developing it [1]. Ovarian cancer encompasses a larger group of malignancies that vary in origin, grade, and histology [1,2,6]. For the purposes of this review, "ovarian cancer" refers to the most common type, high-grade serous ovarian cancer [1,2,6,7].

Initial treatment of ovarian cancer typically starts with either cytoreductive surgery or platinum-based chemotherapy, depending on the extent of disease spread. The combination of the two treatments, regardless of order, has been the standard of care (SOC) for ovarian cancer [2,7]. Although most patients respond initially to SOC treatment, the recurrence rate of ovarian cancer is around 70 percent [2,5,7]. This high rate of recurrence makes the development of novel, targeted treatments necessary. It is important to consider the disease’s genetic features and molecular profile to understand the targeted therapies available to patients with ovarian cancer. Some of the most promising targeted treatments involve PARP inhibitors [2,5,7]. Other targeted therapies, such as Bevacizumab, are being explored for use in conjunction with chemotherapy [2,7]. This review will examine and evaluate the current options and limitations of diagnostic screening and explore secondary treatments of ovarian cancer.

Diagnostics of Ovarian Cancer

Overview and Limitations of Current Diagnostic Techniques

Diagnostic testing and screening play an extremely important role in ovarian cancer prognosis because the earlier the cancer is found, the higher the likelihood of a positive outcome [4,8]. Cancer prognostics predict the likely course of a disease. Unfortunately, there are no highly effective biomarkers used preventively in the diagnosis of ovarian cancer [2,4,8,9]. The most common method for detection of ovarian cancer is the use of transvaginal ultrasonography and cancer antigen 125 (CA125) level measurement [1,3,4]. CA125 is a membrane glycoprotein antigen with elevated expression levels in ovarian cancer [2,4]. CA125 is not very effective for early detection, such as at stage I, as its sensitivity is reported to be under 50% [8]; in the advanced stage of the disease, its sensitivity rises to around 80-90% [4]. Although CA125 has shortcomings as a diagnostic marker, it is also used as a prognostic marker to assess response to treatment and recurrence of the disease [2,6]. Transvaginal ultrasonography is most useful as a diagnostic technique when combined with CA125 testing, because on its own it produces a high rate of false positives [4,10]. The false positives stem from the ultrasound’s low specificity in distinguishing malignant from benign abnormalities. In addition, transvaginal ultrasonography is not always able to detect a tumor prior to metastasis [10].

Since there is no single biomarker that is highly effective in detecting ovarian cancer, detection markers have been combined and used algorithmically for higher sensitivity and specificity. It is important to understand that these tests are not true diagnostic tests; they are used to determine the likelihood of malignancy and the extent of disease spread [8,9]. Two examples of algorithms used for diagnostic testing are the Risk of Ovarian Malignancy Algorithm (ROMA) [9] and the Risk of Malignancy Index (RMI) [8]. RMI is a mathematical formula that combines CA125 levels, ultrasound findings, and menopausal status into a single score used to evaluate the risk of malignancy [8]. ROMA is a newer algorithm, proposed in 2009, that combines human epididymis protein 4 (HE4) and CA125 levels [8,9]. HE4 is overexpressed in serous ovarian cancer, with a sensitivity of around 76.5% when combined with CA125 [4,8]. ROMA was a significant improvement over single-biomarker tests because of its increased sensitivity, but it is typically not used until a symptom or sign of ovarian cancer is present [8,9]. Overall, a combination of biomarkers is currently used for the diagnosis and prognosis of ovarian cancer, yet these tests still need higher specificity and sensitivity. With more sensitive biomarkers, diagnosing ovarian cancer before symptoms appear would yield a much better prognosis for patients.
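
To make the multiplication behind RMI concrete, here is a minimal sketch of the original formulation (often called RMI I), in which an ultrasound score U (0, 1, or 3, depending on how many suspicious features are seen), a menopausal score M (1 if premenopausal, 3 if postmenopausal), and the serum CA125 level are simply multiplied. The feature list, score values, and the commonly used cutoff of 200 follow the RMI I convention; later RMI versions weight these inputs differently, so treat this as an illustration rather than clinical guidance.

```python
# Sketch of the RMI I scoring rule: RMI = U x M x CA125.
# Illustrative only; scoring conventions differ between RMI versions.

SUSPICIOUS_FEATURES = ("multilocular cyst", "solid areas",
                       "bilateral lesions", "ascites",
                       "intra-abdominal metastases")

def rmi(ca125_u_per_ml, n_features, postmenopausal):
    """Combine CA125 level, ultrasound findings, and menopausal status."""
    u = 0 if n_features == 0 else 1 if n_features == 1 else 3
    m = 3 if postmenopausal else 1
    return u * m * ca125_u_per_ml

# Example: postmenopausal patient, two suspicious ultrasound features,
# CA125 of 45 U/mL -> 3 * 3 * 45 = 405, above the usual cutoff of 200.
score = rmi(ca125_u_per_ml=45, n_features=2, postmenopausal=True)
print(score, "high risk" if score >= 200 else "lower risk")
```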

Novel Diagnostic Testing

Most of the diagnostic tests described above rely on protein-based markers. More recent research explores the use of nucleic acids as novel serum markers, which have the potential to yield higher specificity and sensitivity than protein-based biomarkers [9]. One kind of nucleic acid being explored for ovarian cancer diagnostics is microRNA (miRNA), a noncoding, single-stranded molecule 20-22 nucleotides in length. miRNAs are involved in the post-transcriptional regulation of genes, which affects a variety of processes relating to cell development, differentiation, and growth [11,12]. In ovarian cancer, miRNAs can play many different roles. Elevated expression levels of certain miRNAs have been associated with tumor progression and are a potential noninvasive diagnostic biomarker [13,14]. Panels of miRNAs have been shown to discriminate between patients with malignant versus benign ovarian disease. A smaller subgroup of these miRNAs can even be used in association with lymph node metastasis and the identification of stage III or IV disease. There is also a correlation between the upregulation of certain miRNAs and serum CA125 values [13]. miRNAs therefore hold diagnostic value for the detection of ovarian cancer, and this novel area of research brings hope to ovarian cancer diagnostics.

Another novel diagnostic biomarker under investigation is circRNAs. CircRNAs are single-stranded circular RNA transcripts that belong to the family of long noncoding RNAs. They were originally thought to be functionless byproducts of RNA splicing until they were discovered to play a regulatory role in tumor progression. In ovarian cancer, circRNAs act primarily as miRNA sponges, binding to miRNAs and thereby inhibiting them from carrying out their functions. This activity contributes to and regulates ovarian cancer cell proliferation and invasion. The elevation of circRNAs in the sera of patients with ovarian cancer shows their potential as an effective diagnostic marker, given their high abundance and stability in ovarian cancer tissues. CircRNA is more stable than linear RNA because it is covalently closed, containing neither a 5’ cap nor a poly-A tail [3,15]. Further research is needed to fully understand the role of circRNAs in ovarian cancer, but current work shows their great potential as a diagnostic marker.

Current Treatment of Ovarian Cancer

Standard of Care and Limitations

The standard of care (SOC) is the treatment process a physician would advise based on current knowledge and research surrounding a disease. The treatment of ovarian cancer is personalized to the patient depending on the extent of disease spread, with defined SOC protocols for each stage. If ovarian cancer is found at an early, localized stage, the SOC is surgery, in an attempt to eliminate the cancer cells present in the area. Unfortunately, ovarian cancer is most commonly diagnosed at an advanced stage, for which the SOC is surgery, if possible, combined with platinum-based chemotherapy. Whether to start with surgery or chemotherapy depends on whether there is evidence of metastatic disease. If there is no evidence of metastasis, it is recommended to undergo surgery first; surgery is primarily used when the cancer is localized to the abdominal area. If the disease has metastasized, then chemotherapy is typically the first line of defense [2,7,16]. The ability of surgery to reduce the likelihood of residual disease once the cancer has spread remains controversial. The most important factor in whether surgery has a positive impact is the extent of disease spread: in patients with a low or moderate preoperative disease burden, as opposed to extensive disease, starting with surgery remains important [17].

Platinum-Based Therapy

A hallmark of cancer is that cells divide rapidly and evade normal cell death. Chemotherapy is a treatment that uses drugs to kill rapidly dividing cells. The use of platinum-based carboplatin and Taxol (paclitaxel) to treat advanced ovarian cancer is the standard [16,17]; this standard chemotherapy consists of intravenous carboplatin and Taxol every three weeks. Another treatment option being explored is a dose-dense therapy regimen, which gives the same total dose of intravenous chemotherapy as standard chemotherapy but at shorter intervals. The idea is that highly aggressive cancers, like ovarian cancers, could have less regrowth if the cycles between treatments are shorter. Although real-world data suggest that dose-dense therapy improves overall survival, randomized trials found no significant difference [2,18].

Ifosfamide-based Therapy

Ifosfamide-based regimens are also used in current treatments of ovarian cancer. Ifosfamide belongs to the nitrogen mustard family of medications, the first class of chemotherapy drugs. Patients can be chemotherapy-sensitive or chemotherapy-resistant; sensitivity to a chemotherapy treatment means that the patient’s cancer responds to it. It has been demonstrated that patients are more sensitive to ifosfamide-based regimens than to platinum-based therapies. Despite this higher sensitivity, ifosfamide-based therapy is more toxic than platinum-based therapy. The patient’s response to platinum-based chemotherapy is typically assessed first, and ifosfamide therapy can be used if there is no response [16,20].

Even with the SOC, the prognosis of patients with advanced ovarian cancer remains poor. As mentioned previously, the late-stage diagnosis of the disease and its high recurrence rate make therapeutic strategies for ovarian cancer an ongoing challenge [19].

Novel Targeted Therapies

Currently, targeted treatments are used as second-line therapy for ovarian cancer. Targeted agents have entered clinical trials only relatively recently compared to the SOC treatments, and for this reason, together with the aggressive nature of ovarian cancer, they are typically used only as secondary treatment [2,7,16]. Recent research indicates the potential of targeted therapies as first-line treatment, but more work is needed to establish them as a new standard of care [5,7].

PARP inhibitors

BRCA1/2 mutations are present in about 18% of high-grade serous ovarian cancers and guide which treatment options are available. The BRCA1 and BRCA2 genes are tumor suppressor genes involved in the repair of DNA double-stranded breaks (DSBs) through the error-free homologous recombination (HR) pathway. If an individual has a BRCA1/2 mutation, the error-prone nonhomologous end-joining (NHEJ) pathway is used to repair DSBs instead, which can result in the formation of cancer cells. PARP inhibition is a treatment option targeted at individuals harboring BRCA1/2 mutations. Poly(ADP-ribose) polymerases (PARPs) are enzymes that repair single-stranded breaks (SSBs) in DNA through the base excision repair (BER) pathway. When PARP is inhibited, SSBs go unrepaired, which results in the formation of DSBs at the replication fork. This is toxic to the cell and, when not resolved, causes cell death. It has been found that a BRCA mutation or PARP inhibition alone has no detrimental effect on cells; it is the combination of a BRCA mutation and PARP inhibition from which the cell cannot recover, a phenomenon known as synthetic lethality [5,7].
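
The synthetic lethality described above amounts to a simple decision table: either insult alone leaves the cell a working repair route, but together they do not. The sketch below is just that logic written out as code, an illustration of the reasoning rather than a biological model.

```python
# Synthetic lethality of BRCA1/2 mutation plus PARP inhibition,
# written as a decision table (an illustration of the logic above,
# not a biological model).

def cell_fate(brca_mutated: bool, parp_inhibited: bool) -> str:
    if brca_mutated and parp_inhibited:
        # No homologous recombination (BRCA lost) and no base excision
        # repair of SSBs (PARP blocked): DSBs at replication forks go
        # unrepaired and the cell dies.
        return "cell death (synthetic lethality)"
    # With only one hit, a repair route remains: error-prone NHEJ if
    # BRCA is mutated, or error-free HR if only PARP is inhibited.
    return "survives"

for brca in (False, True):
    for parpi in (False, True):
        print(f"BRCA mutated={brca!s:5}  PARP inhibited={parpi!s:5}  ->",
              cell_fate(brca, parpi))
```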

The first PARP inhibitor approved for patients with germline BRCA1/2 mutations and high-grade ovarian cancer was Olaparib. It has been approved for patients who have undergone three or more prior lines of chemotherapy and for patients undergoing maintenance therapy after a complete or partial response to platinum chemotherapy. Patients using Olaparib as maintenance therapy do not need to have a BRCA1/2 mutation, although the objective response rate is higher in those who do. Compared with placebo, Olaparib taken twice daily significantly prolongs survival free of disease progression. PARP inhibitors open up a new paradigm of treatment for patients with ovarian cancer and are currently undergoing clinical trials as a first-line treatment [5,7].

Bevacizumab

Bevacizumab, also known as Avastin, is a monoclonal antibody that has been used to treat ovarian cancer in conjunction with carboplatin and Taxol chemotherapy. It is an anti-angiogenic agent, acting to slow tumor growth by inhibiting the formation of the new blood vessels the cancer needs. Randomized trials have shown improved progression-free survival with the addition of Bevacizumab to standard chemotherapy. However, findings suggest no increase in overall survival with the addition of Bevacizumab [21], and the drug brings additional toxicities such as hypertension and delayed wound healing [7]. These are all important factors to consider when evaluating treatment options.

Conclusion

Ovarian cancer today remains the deadliest of the gynecological malignancies [2,4,5]. This review highlights the critical need for more effective diagnostics for earlier detection of the disease, and of cancer in general. The future of diagnostics is moving toward personalized testing and screening of multiple cancer types based on unique genetic profiles. While many novel targeted therapies are being explored, it is imperative that research on treatments continue to evolve and improve. Although a diagnosis of ovarian cancer remains serious, there is hope that novel diagnostic techniques and targeted therapies such as PARP inhibitors and Bevacizumab will improve the outcomes of patients with the disease.

 

References

  1. Torre, L.A., Trabert, L., DeSantis, C. E., Miller, K. D., Samimi, G., Runowicz, C. D., Gaudet, M. M., Jemal, A., Siegel, R. L. (2018). Ovarian Cancer Statistics, 2018. CA: A Cancer Journal for Clinicians, 68:284-296.
  2. Lisio, M., Fu, L., Goyeneche, A., Gao, Z., Telleria, C. (2019). High-Grade Serous Ovarian Cancer: Basic Sciences, Clinical and Therapeutic Standpoints. International Journal of Molecular Sciences, 20, 952.
  3. Sheng, R., Li, X., Wang, Z., Wang, X. (2020). Circular RNAs and their emerging roles as diagnostic and prognostic biomarkers in ovarian cancer. Cancer Letters, 473:139-147.
  4. Muinai, T., Boruah, H. P. D., Pal, M. (2018). Diagnostic and prognostic biomarkers in ovarian cancer and the potential roles of cancer stem cells - An updated review. Experimental Cell Research, 362:1-10.
  5. Taylor, K. N., Eskander, R. N. (2018). PARP Inhibitors in Epithelial Ovarian Cancer. Recent Patents on Anticancer Drug Discovery, 13(2).
  6. Cobb, L.P., Gaillard, S., Wang, Y., Shih, L., Secord, A.A. (2015). Adenocarcinoma of Mullerian Origin: review of pathogenesis, molecular biology, and emerging treatment paradigms. Gynecologic Oncology Research and Practice, 2:1.
  7. Lheureux, S., Braunstein, M., & Oza, A. M. (2019). Epithelial ovarian cancer: evolution of management in the era of precision medicine. CA: a cancer journal for clinicians, 69(4), 280-304.
  8. Dochez, V., Cailon, H., Vaucel, E., Dimet, J., Winer, N., Ducarme, G. (2019). Biomarkers and algorithms for diagnosis of ovarian cancer: CA125, HE4, RMI and ROMA, a review. Journal of Ovarian Research, 12:28.
  9. Ueland, F. R. (2017). A perspective on ovarian cancer biomarkers: past, present and yet-to-come. Diagnostics, 7(1), 14.
  10. Kamal, R., Hamed, S., Mansour, S., Mounir, Y., & Abdel Sallam, S. (2018). Ovarian cancer screening—ultrasound; impact on ovarian cancer mortality. The British journal of radiology, 91(1090), 20170571.
  11. Resnick, K. E., Alder, H., Hagan, J. P., Richardson, D. L., Croce, C. M., & Cohn, D. E. (2009). The detection of differentially expressed microRNAs from the serum of ovarian cancer patients using a novel real-time PCR platform. Gynecologic oncology, 112(1), 55-59.
  12. Chen, S. N., Chang, R., Lin, L. T., Chern, C. U., Tsai, H. W., Wen, Z. H., … & Tsui, K. H. (2019). MicroRNA in ovarian cancer: biology, pathogenesis, and therapeutic opportunities. International journal of environmental research and public health, 16(9), 1510.
  13. Meng, X., Müller, V., Milde-Langosch, K., Trillsch, F., Pantel, K., & Schwarzenbach, H. (2016). Diagnostic and prognostic relevance of circulating exosomal miR-373, miR-200a, miR-200b and miR-200c in patients with epithelial ovarian cancer. Oncotarget, 7(13), 16923.
  14. Wang, W., Yin, Y., Shan, X., Zhou, X., Liu, P., Cao, Q., … & Zhu, W. (2019). The value of plasma-based microRNAs as diagnostic biomarkers for ovarian cancer. The American journal of the medical sciences, 358(4), 256-267.
  15. Pei, C., Wang, H., Shi, C., Zhang, C., & Wang, M. (2020). CircRNA hsa_circ_0013958 may contribute to the development of ovarian cancer by affecting epithelial‐mesenchymal transition and apoptotic signaling pathways. Journal of clinical laboratory analysis, 34(7), e23292.
  16. Penson, R. T., et al. (2019). Clinical features, diagnosis, staging, and treatment of uterine carcinosarcoma. Wolters Kluwer.
  17. Horowitz, N. S., Miller, A., Bunja Rungruang, S. D. R., Rodriguez, N., Bookman, M. A., Hamilton, C. A., … & Maxwell, G. L. (2015). Does aggressive surgery improve outcomes? Interaction between preoperative disease burden and complex surgery in patients with advanced-stage ovarian cancer: an analysis of GOG 182. Journal of Clinical Oncology, 33(8), 937.
  18. Kessous, R., Matanes, E., Laskov, I., Wainstock, T., Abitbol, J., Yasmeen, A., … & Gotlieb, W. H. (2020). Carboplatin plus paclitaxel weekly dose‐dense chemotherapy for high‐grade ovarian cancer: A re‐evaluation. Acta Obstetricia et Gynecologica Scandinavica.
  19. Huang, C. Y., Cheng, M., Lee, N. R., Huang, H. Y., Lee, W. L., Chang, W. H., & Wang, P. H. (2020). Comparing Paclitaxel–Carboplatin with Paclitaxel–Cisplatin as the Front-Line Chemotherapy for Patients with FIGO IIIC Serous-Type Tubo-Ovarian Cancer. International journal of environmental research and public health, 17(7), 2213.
  20. Maheshwari, U., Rajappa, S. K., Talwar, V., Goel, V., Dash, P. K., Sharma, M., … & Doval, D. C. (2021). Adjuvant chemotherapy in uterine carcinosarcoma: Comparison of a doublet and a triplet chemotherapeutic regimen. Indian Journal of Cancer. 
  21. Loizzi, V., Del Vecchio, V., Gargano, G., De Liso, M., Kardashi, A., Naglieri, E., … & Cormio, G. (2017). Biological pathways involved in tumor angiogenesis and bevacizumab based anti-angiogenic therapy with special references to ovarian cancer. International journal of molecular sciences, 18(9), 1967.

 


Strimvelis: An Application of Personalized Medicine

By Aditi Goyal, Genetics & Genomics, Statistics, ‘22

Author’s Note: I heard about this therapy during a freshman seminar, and I presented on this during that class. This article is an adaptation of that presentation. 

 

ADA-SCID is a rare, autosomal recessive disease that cripples one’s immune system. ADA-SCID stands for severe combined immunodeficiency due to adenosine deaminase deficiency, which occurs due to a mutation in the ADA gene [1]. This gene normally assists in the production and regulation of lymphocytes, a type of white blood cell [1]. Specifically, adenosine deaminase (ADA) breaks down deoxyadenosine, which is toxic to lymphocytes [2]. In the absence of a working ADA gene, deoxyadenosine accumulates in the body and continues to degrade lymphocytes. Eventually, the lack of functioning lymphocytes leads to severe combined immunodeficiency (SCID) [2].

ADA-SCID is typically screened for at birth and has a variety of treatment options. The most common treatment is a bone marrow transplant from a sibling. In this process, stem cells are taken from a donor with a matching tissue type and transplanted into the patient, with the hope that these cells will proliferate and produce healthy lymphocytes [3]. While this approach is effective approximately 70% of the time, the real challenge lies in matching a patient to a donor. Because the patient’s immune system is already so compromised, there is a high possibility of transplant rejection. Additionally, for patients who do not have a sibling or family member who is able to donate, finding a match can be incredibly difficult.

For patients unable to have a transplant, enzyme therapy is also a possible form of treatment [3]. Enzyme replacement therapy (ERT) simply provides the patient with a working copy of an enzyme, in this case ADA [4]. The drawback of this form of treatment is that it leaves patients dependent on a hospital for their entire lives. They cannot travel too far from a hospital for too long, because missing a delivery of the enzyme can have drastic consequences [5]. Additionally, ERT can lose effectiveness over the years [4].

The third, and still experimental, treatment option is a gene therapy known as Strimvelis [6]. Strimvelis is one of the first gene therapy products to be used anywhere in the world. While it has yet to be approved by the FDA in the United States, it marks a milestone in the development of personalized medicine.

Strimvelis treatment has three steps, starting with harvesting hematopoietic stem cells (HSCs) from the patient. These cells carry the mutated ADA gene and are ineffective at catalyzing the breakdown of deoxyadenosine. Once the cells are extracted, the corrected ADA gene is delivered to the HSCs in an ex vivo environment using a gammaretrovirus [7]. Once the cells have been transformed, they are delivered back to the patient by IV drip and take hold in the body, following a dose of busulfan or melphalan [8]. These two chemotherapy drugs are intended to kill any remaining damaged HSCs in the body, allowing the corrected cells to grow without interference. Once injected, the corrected cells continue to proliferate, producing a healthy amount of ADA. This therapy works well because the patient’s own HSCs are used, so there is little to no risk of rejection by the immune system. Another key advantage of Strimvelis is that it is a single treatment: once the corrected HSCs are delivered to the body, the patient is considered “cured” and is no longer reliant on any medical procedures to maintain their immunity.

The results of Strimvelis trials have been incredibly promising. A clinical trial evaluated by the European Medicines Agency (EMA) found Strimvelis to have a 100% success rate, leading to its approval by the European Commission about one month later [9]. However, there have been rare cases of patients developing T-cell leukemia after Strimvelis treatment [10]. These cases led Orchard Therapeutics, the company behind Strimvelis, to halt all administration of the therapy until an investigation of its possible carcinogenic effects has been completed [11].

Another primary drawback of Strimvelis is its cost. Strimvelis costs 594,000 euros per patient, which is equivalent to approximately 650,000 dollars [12]. While Strimvelis is not the most expensive gene therapy on the market, the cost is still incredibly restrictive, as the average middle-class family would not be able to afford this treatment.

The reason the cost of this treatment is so high is that ADA-SCID is considered an orphan disease. Orphan diseases are conditions that affect fewer than 200,000 people [13], which means that, from the perspective of a pharmaceutical company, it is not cost-effective to develop a treatment. ADA-SCID affects only around 350 people worldwide [2]. The cost per patient is therefore high, since few people are affected by this disorder and the therapy cannot be mass-produced.

Strimvelis is not perfect by any means. There are still thousands of unknowns surrounding gene editing, and the side effects can be dramatic. Even with Strimvelis on the market, it is not the first-choice treatment option for most ADA-SCID patients. Nevertheless, it is a step forward. In a world where we learn more about our genetics every day, Strimvelis is a milestone in the development of personalized medicine.

 

References

  1. Adenosine deaminase deficiency: MedlinePlus Genetics. (2020, August 18). Retrieved March 10, 2021, from https://medlineplus.gov/genetics/condition/adenosine-deaminase-deficiency/
  2. Hershfield M. Adenosine Deaminase Deficiency. 2006 Oct 3 [Updated 2017 Mar 16]. In: Adam MP, Ardinger HH, Pagon RA, et al., editors. GeneReviews® [Internet]. Seattle (WA): University of Washington, Seattle; 1993-2021. Available from: https://www.ncbi.nlm.nih.gov/books/NBK1483/
  3. Kohn DB, Hershfield MS, Puck JM, Aiuti A, Blincoe A, Gaspar HB, Notarangelo LD, Grunebaum E. Consensus approach for the management of severe combined immune deficiency caused by adenosine deaminase deficiency. J Allergy Clin Immunol. 2019 Mar;143(3):852-863. doi: 10.1016/j.jaci.2018.08.024. Epub 2018 Sep 5. PMID: 30194989; PMCID: PMC6688493.
  4. Adenosine deaminase deficiency: Treatment and prognosis. (n.d.). Retrieved March 10, 2021, from https://www.uptodate.com/contents/adenosine-deaminase-deficiency-treatment-and-prognosis#H2288540670
  5. Scott O, Kim VH, Reid B, Pham-Huy A, Atkinson AR, Aiuti A, Grunebaum E. Long-Term Outcome of Adenosine Deaminase-Deficient Patients-a Single-Center Experience. J Clin Immunol. 2017 Aug;37(6):582-591. doi: 10.1007/s10875-017-0421-7. Epub 2017 Jul 26. PMID: 28748310.
  6. Aiuti, Alessandro et al. “Gene therapy for ADA-SCID, the first marketing approval of an ex vivo gene therapy in Europe: paving the road for the next generation of advanced therapy medicinal products.” EMBO molecular medicine vol. 9,6 (2017): 737-740. doi:10.15252/emmm.201707573
  7. Candotti F (April 2014). “Gene transfer into hematopoietic stem cells as treatment for primary immunodeficiency diseases”. International Journal of Hematology. 99 (4): 383–92. doi:10.1007/s12185-014-1524-z. PMID 24488786. S2CID 8356487.
  8. Touzot F, Hacein-Bey-Abina S, Fischer A, Cavazzana M (June 2014). “Gene therapy for inherited immunodeficiency”. Expert Opinion on Biological Therapy. 14(6): 789–98. doi:10.1517/14712598.2014.895811. PMID 24823313. S2CID 207483238.
  9. https://www.ema.europa.eu/en/documents/product-information/strimvelis-epar-product-information_en.pdf
  10. Stirnadel-Farrant, Heide et al. “Gene therapy in rare diseases: the benefits and challenges of developing a patient-centric registry for Strimvelis in ADA-SCID.” Orphanet journal of rare diseases vol. 13,1 49. 6 Apr. 2018, doi:10.1186/s13023-018-0791-9
  11. Orchard statement on Strimvelis®, a Gammaretroviral Vector-Based gene therapy For ADA-SCID. (n.d.). Retrieved March 10, 2021, from https://ir.orchard-tx.com/index.php/news-releases/news-release-details/orchard-statement-strimvelisr-gammaretroviral-vector-based-gene
  12. Mullin, E. (2020, April 02). A year after approval, gene-therapy cure gets its first customer. Retrieved March 10, 2021, from https://www.technologyreview.com/2017/05/03/152027/a-year-after-approval-gene-therapy-cure-gets-its-first-customer/

Combating Malaria: Genetically Modified Mosquitoes Projected to Prevail Over Traditional Methods

By Marian Warner, Biotechnology ‘21

Author’s Note: I chose the subject of gene drives for UWP 104E (Writing in Science) because I found it personally interesting and wanted to learn more about its controversy. The more research I did on the subject throughout the quarter, the more I realized how much is unknown to the general public. The mechanism is a bit complex, so there are not many sources that attempt to explain it to a general audience. I hope this paper helps those unfamiliar with gene drives become interested and gain a grasp of the science behind them. I also hope it aids them in forming an informed opinion on the subject, or gives them an idea of what further questions they may want to ask.

 

Malaria, a mosquito-borne disease, has been a stubborn, unrelenting problem despite the established traditional methods used to combat it. Because female mosquitoes of the genus Anopheles are the intermediary hosts transmitting the infectious agent, Plasmodium parasites, from infected to non-infected people, current preventative efforts mainly target the mosquitoes: insecticide-treated bed nets, insecticide for indoor use, and occasional environmental control by destroying larval habitats [1]. For infected individuals, there are treatment options such as antimalarial drugs that can suppress infections, but these target the Plasmodium parasites instead. Although these efforts have helped slow the spread of malaria, it still affected 229 million people and killed an estimated 409,000 in 2019 [2]. Fortunately, scientists such as Andrea Beaghton of Imperial College London have developed a promising strategy that uses genetically modified mosquitoes to rapidly spread male-biasing sex determination genes, exponentially decreasing the amount of breeding in a population. In a laboratory setting, this approach has completely wiped out populations of malaria-carrying mosquitoes in as little as 30 generations [3]. The complete collapse of malaria-carrying mosquito populations in the wild would efficiently lead to the eradication of malaria.

 

The Role of Selfish Genetic Elements

The strategy works by utilizing a gene drive, a technique that spreads a gene throughout a population at an abnormally fast rate. Usually, any given copy of a gene has a 50 percent chance of being passed down to one’s offspring. This is because each diploid organism, such as a human or an insect, carries two alleles, or versions, of a given gene, one inherited from each of the organism’s parents.

When it comes to a gene drive, however, the allele in question is almost always inherited. Scientists use a naturally occurring selfish genetic element to propel gene drives. Selfish genetic elements, or selfish genes, are alleles that convert the other inherited allele into a copy of themselves. In this scenario, the gene no longer has a 50 percent chance of being passed down, but instead nearly a 100 percent chance [Figure 1].

Figure 1

The Mechanism Behind the Selfish Genetic Element

To convert the other inherited allele into a replica of itself, the selfish gene exploits a natural process the genome uses to repair itself, known as homology-directed repair, allowing even harmful genes to bypass the rules of natural selection. The element encodes a pair of biological scissors, an endonuclease, that cuts DNA at the position of the second allele. Once the endonuclease slices open a segment of DNA, the DNA becomes unstable and gets chewed back, and the allele is lost. The cell registers the damage and repairs the region by copying the DNA sequence of the selfish genetic element into the spot where the naturally occurring allele used to be, resulting in both alleles being copies of the selfish gene [4]. The only reason the mechanism does not work 100 percent of the time in practice is that the endonuclease occasionally fails to recognize the target allele [5].
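
To see how strongly this copying mechanism biases inheritance, consider a toy calculation. Under random mating with no selection, an ordinary allele at frequency p stays at frequency p; if a fraction e of heterozygotes convert their wild-type allele into a drive copy, the frequency instead follows p' = p + e*p*(1 - p). The sketch below iterates this recursion with illustrative numbers and ignores the fitness costs and resistance alleles that real drives must contend with.

```python
# Toy recursion for drive-allele spread under random mating.
# Illustrative assumptions: no fitness cost, no resistance alleles,
# non-overlapping generations, homing efficiency e = 0.95.
# Setting e = 0 recovers ordinary Mendelian inheritance (p never moves).

def next_freq(p, e):
    # Heterozygotes occur at frequency 2p(1-p); a fraction e of them
    # gain a second drive copy, adding e*p*(1-p) to the allele frequency.
    return p + e * p * (1 - p)

p = 0.025  # drive starts as 2.5% of all alleles
for gen in range(1, 11):
    p = next_freq(p, e=0.95)
    print(f"generation {gen:2d}: drive allele frequency = {p:.3f}")
# The frequency roughly doubles each early generation and approaches
# fixation within about ten generations, whereas an ordinary allele
# would still be at 2.5%.
```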

 

The Anopheles gambiae Gene Drive

With powerful new genome-editing technologies, scientists can turn nearly any gene into a selfish genetic element. Many have focused on identifying and genetically modifying an allele that could wipe out the population of Anopheles gambiae, the mosquito species most commonly involved in the spread of malaria. However, this is not as simple as it seems at first glance. Trying to spread a deadly allele would not work, because the mosquitoes need to be alive and reproducing to spread it. Spreading an allele that merely weakens mosquitoes would not work either: eliminating their ability to fly, for example, could briefly suppress a population, but the chance that mosquitoes evolve resistance to such a gene drive through natural selection is high.

Instead, scientists have decided to target genes involved in sex determination, making the population spread genes that reduce female survivorship. Under this gene drive, females are born with intersex mouthparts that do not allow them to feed, and they therefore die relatively quickly. Targeting females is beneficial because they are the only ones capable of transmitting malaria. Additionally, as the population becomes increasingly male-dominated, chances of reproduction become slimmer and fewer mosquitoes are born. These specific sex-determining genes are unlikely to provoke evolved resistance, because any change to a pathway as specific as sex determination will most likely have detrimental effects on the mosquito, preventing the spread of such mutations by natural selection [3]. As the gene drive rapidly spreads throughout the population, the number of malaria cases would rapidly decrease. Compared to traditional methods, the strategy would be incredibly efficient and low-cost, but there is potential for unknown side effects, leading many to believe that traditional methods remain the best option for now.

 

Potential Consequences of Traditional Methods and Gene Drives

Despite the efficiency gene drives are thought to have, skeptics often argue that the current methods used to control malaria carry a much lower risk of adverse side effects. Experts have approved pyrroles and pyrethroids as insecticides for mosquito bed nets because of their relatively small consequences for human health. However, mosquitoes are now evolving resistance to pyrethroids. To reduce the odds of mosquitoes becoming resistant, many bed nets include multiple insecticides; however, there is not yet evidence that these nets work in regions that already have high levels of pyrethroid resistance [6]. Similarly, Plasmodium parasites have been found to harbor resistance to antimalarial drugs. Partial resistance to the drug artemisinin has already been detected in over five percent of some Plasmodium populations, and several other types of drug resistance have been detected as well. For now, insecticides and antimalarial drugs can continue to be used effectively, but resistance must be closely monitored, by collecting data on malaria treatment outcomes and looking for molecular markers of resistance in natural populations, to prevent these methods from becoming useless in the future [1].

Many scientists agree that gene drives should be studied further, as they currently have the potential for more concerning side effects than traditional methods. One significant concern is the unknown effect on the food chain of eliminating A. gambiae populations [7]. So far, studies show that few animals rely solely on A. gambiae as a food source, so many experts believe the chance of negative environmental impacts is slim, although there may always be the potential for side effects that were not studied in a specific sub-population or environmental niche [8]. Another concern is the potential for unethical uses of this new technology [7]. If, for example, someone released a gene drive before enough research had been done and before it had been approved by a regulatory agency, serious environmental consequences could follow; laws should be put in place to prevent such a scenario. Jim Thomas, a member of the Action Group on Erosion, Technology and Concentration, says, “So far, all the proposals around gene drives are things like voluntary ethics codes and agreements between funders. They’re not binding in any way, so to what extent they can be enforced and who would be liable in the event of a problem — there’s none of that” [7]. Kevin Esvelt, one of the researchers who helped engineer the first gene drive, agrees that gene drive technology development could lead to consequences. “This isn’t just going to be about malaria,” Esvelt said. “This is potentially going to be something any individual who can make a transgenic fruit fly could build to edit all the fruit flies” [9].

 

The Costs of Traditional Methods and Gene Drives

Despite potential ethical and environmental concerns about the technology, the cost of gene drive research and implementation is lower overall than the cost of traditional methods and would save money in the long run. Producing and dispersing nets, insecticides, and drugs each year is more expensive than developing and releasing a successful gene drive. The World Health Organization estimated that about $6.8 billion in resources for malaria prevention was needed in 2020, and that the required amount will continue rising by an estimated additional $720 million each year [2]. As long as there is no complete way to prevent malaria, these annual costs are unlikely to disappear any time soon.

Unlike traditional methods, gene drive research has the potential to eradicate malaria completely and thereby curb all expenses involved in malaria research and equipment dispersal. The Bill and Melinda Gates Foundation contributes the majority of grants going into gene drive research, donating about $7.4 million in grants in 2020. This funding has gone toward furthering promising research on gene drives and studies of their environmental effects [10]. Ongoing research in future years may continue to require similar amounts of funding. However, this price is relatively small considering how much funding goes into malaria prevention and control annually, as well as the potential for a gene drive to completely eliminate the need for future funding of any kind. The cost of real-world implementation is thought to be negligibly small, since the process involves releasing only a small population of mosquitoes into the wild.

 

Efficiency of Traditional Methods and Gene Drives

Although cost is a big factor, the main reason for the huge support of gene drive research is the evidence that a gene drive would be much more effective than current traditional strategies. Current strategies such as insecticide and drug use have not led, and are unlikely to lead, to the elimination of malaria. Data suggest an overall trend toward fewer malaria cases, likely due to the traditional methods currently in place [1]. However, the ultimate goal is the full eradication of malaria around the world. Despite prevalent insecticide use and mosquito population control, there is still always a chance of a deadly mosquito bite in areas hit hard by malaria, especially where drug resistance has arisen in Plasmodium. Mutations conferring partial drug resistance have already been detected in Plasmodium, and the more these drugs are used, the more likely resistance is to continue developing. Overall, insecticides can be only somewhat effective, and cases of treatment failure are on the rise [1].

Gene drives, on the other hand, have been incredibly promising when it comes to efficiency. In the study by Beaghton and colleagues, when the gene drive allele was released into a caged population at only 2.5 percent of the population, the entire population was predicted to crash within roughly 30 generations [3]. Some models predict that even fewer mosquitoes would need to be released into the wild in a real-world scenario: one predicted that the release of just 500 gene drive mosquitoes could result in the complete collapse of a targeted mosquito species population within eight years [11]. Further mathematical models may be used in the future to calculate the optimal proportion of genetically modified mosquitoes to release in order to wipe out the population in the shortest feasible timeframe (see the sketch below for the flavor of such models). From there, more gene drives targeting the other species of malaria-carrying mosquitoes could be released, leading to the complete eradication of malaria. For now, scientists must continue to study the safety of gene drives in the lab, computationally, and perhaps in small, contained real-world settings. Additionally, further laws and policies will hopefully continue to be developed to regulate this powerful technology. With enough time and research, affected communities and authorities may approve the gene drive strategy and begin implementing it in the near future.
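
For a rough intuition of how a male-biasing drive can crash a population, the deterministic toy model below tracks female numbers over generations. Every parameter here is an illustrative assumption (replacement-level reproduction, the drive tracked only through fathers, no density dependence, fitness costs, or resistance); published models such as the one by Eckhoff and colleagues [11] are far more detailed, so the exact timing should not be read as a prediction.

```python
# Toy suppression model: carrier fathers sire mostly carrier sons,
# so the carrier fraction among males rises while the number of
# daughters (and hence future breeding) falls.
# All parameters are illustrative assumptions, not fitted values.

def simulate(females=500.0, wild_males=487.5, drive_males=12.5,
             male_bias=0.95, brood_per_female=2.0, max_gens=40):
    for gen in range(max_gens):
        males = wild_males + drive_males
        if females < 1 or males < 1:
            print(f"generation {gen}: population has collapsed")
            return
        q = drive_males / males   # chance a female mates with a carrier
        print(f"gen {gen:2d}: females = {females:6.1f}, "
              f"carrier fraction among males = {q:.0%}")
        brood = brood_per_female * females
        # Carrier-sired broods: male_bias sons (all carriers), the rest
        # daughters; wild-sired broods: 50/50 and drive-free. Daughters
        # of carriers are counted as wild here, understating the spread.
        females = brood * (q * (1 - male_bias) + (1 - q) * 0.5)
        drive_males = brood * q * male_bias
        wild_males = brood * (1 - q) * 0.5

simulate()  # with these numbers the female count collapses in ~12 generations
```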

 

Bibliography

  1. World Health Organization. 2019. World malaria report 2019. Geneva, Switzerland: World Health Organization. 
  2. World Health Organization. 2020. World malaria report 2020. Geneva, Switzerland: World Health Organization. 
  3. Simoni, A., Hammond, A. M., Beaghton, A. K., Galizi, R., Taxiarchi, C., Kyrou, K., Meacci, D., Gribble, M., Morselli, G., Burt, A., Nolan, T., & Crisanti, A. (2020). A male-biased sex-distorter gene drive for the human malaria vector Anopheles gambiae. Nature biotechnology, 38(9), 1054–1060. https://doi.org/10.1038/s41587-020-0508-1
  4. Windbichler, N., Menichelli, M., Papathanos, P. A., Thyme, S. B., Li, H., Ulge, U. Y., Hovde, B. T., Baker, D., Monnat, R. J., Jr, Burt, A., & Crisanti, A. 2011. A synthetic homing endonuclease-based gene drive system in the human malaria mosquito. Nature [Internet]. 473(7346), 212–215. doi:10.1038/nature09937
  5. Oberhofer, G., Ivy, T., & Hay, B. A. (2018). Behavior of homing endonuclease gene drives targeting genes required for viability or female fertility with multiplexed guide RNAs. Proceedings of the National Academy of Sciences of the United States of America, 115(40), E9343–E9352. https://doi.org/10.1073/pnas.1805278115
  6. Centers for Disease Control and Prevention. Insecticide-Treated Bed Nets. Accessed July 15, 2020. Available from: www.cdc.gov/malaria/malaria_worldwide/reduction/itn.html. 
  7. Kahn, Jennifer. 2020. The Gene Drive Dilemma: We Can Alter Entire Species, but Should We? The New York Times Magazine. 
  8. Collins, C. M., Bonds, J., Quinlan, M. M., & Mumford, J. D. (2019). Effects of the removal or reduction in density of the malaria mosquito, Anopheles gambiae s.l., on interacting predators and competitors in local ecosystems. Medical and veterinary entomology, 33(1), 1–15. https://doi.org/10.1111/mve.12327
  9. Scudellari M. (2019). Self-destructing mosquitoes and sterilized rodents: the promise of gene drives. Nature, 571(7764), 160–162. https://doi.org/10.1038/d41586-019-02087-5
  10. Bill and Melinda Gates Foundation. Awarded Grants. Accessed 15 July 2020. Available from: www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=gene%20drive
  11. Eckhoff PA, Wenger EA, Godfray HC, Burt A. 2017. Impact of mosquito gene drive on malaria elimination in a computational model with explicit spatial and temporal dynamics. Proc Natl Acad Sci U S A [Internet]. 114(2):E255-E264. doi: 10.1073/pnas.1611064114.

A Dive into a Key Player of Learning and Memory: An Interview with Dr. Karen Zito

Image by MethoxyRoxy – Own work, CC BY-SA 2.5

By: Neha Madugala, Neurobiology, Physiology, and Behavior, ‘21

Author’s Note: After writing a paper for the Aggie Transcript on the basics of dendritic spines, I wanted to take a more in-depth look at current research in this field by interviewing UC Davis professor Karen Zito, who is actively involved in dendritic spine research. While a lot of questions remain unanswered within this field, I was interested in learning more about current theories and hypotheses that address some of them. Special thanks to Professor Zito for talking to us about her research. It was an honor to talk to her about her passion and knowledge for this exciting and complex field.

 

Preface: This interview is a follow-up to an original literature review on dendritic spines. For a more in-depth look at general information on dendritic spines, check out this article.

 

Neha Madugala (NM): Can you briefly describe some of the research that your lab does? 

Dr. Karen Zito (KZ): My lab is interested in learning and memory. Specifically, we want to understand the molecular and cellular changes that occur in the brain as we learn. Our brain consists of circuits, connecting groups of neurons, and the function of these circuits allows us to learn new skills and form new memories. Notably, the strength of connections between specific neurons in these circuits can change during learning, or neurons can form new connections with other neurons to support learning. We have been mainly focusing on structural changes in the brain. This includes questions such as the following: How do neurons change in structure during learning? How do new circuit connections get made? What are the molecular signaling pathways that are activated to allow these changes to happen while learning? How is the plasticity of neural circuits altered with age or due to disease?

NM: What are dendritic spines? 

KZ: Dendritic spines are microscopic protrusions from dendrites of neurons and are often the site of change associated with learning. Axons of one neuron will synapse onto the dendritic spine of another neuron. Spines will grow and retract during development and synapses between a spine and axon will form during learning, forming complex circuits that allow us to do intricate tasks such as playing the piano. 

NM: Transient spines only last a couple of days. What role do they play in learning? 

KZ: One hypothesis for the function of transient spines is that they exist to sample the environment, allowing the brain to speed up its ability to find the right connections required for learning. The rapid growth and retraction of transient spines helps our neurons find the right connections for new neural circuits by sampling many more connections and narrowing in on the right ones. For instance, in a past study on songbirds, researchers found that baby songbirds with faster-moving transient spines learned songs more quickly than those with slower-moving transient spines. Once a transient spine finds the right connection, it transitions from transient to permanent and takes part in a circuit that supports a new behavior, such as the songbird learning a new song.

NM: Can presynaptic neurons directly synapse onto a dendrite or only a dendritic spine?

KZ: Many neurons do not have spines at all. Spines are predominantly present on neurons in the higher order areas of the brain involved in learning, memory, perception and cognition. Spiny neurons are present in areas of the brain where neural connections are changing over time, or plastic — allowing the brain to learn, adjust, and change. Certain areas of the brain do not require a lot of change and, in some cases, circuit change may be detrimental to function. For example, we may not want to change connections established for movement of specific muscles.

NM: What is the difference between synapses that occur directly on a dendrite versus onto a dendritic spine?

KZ: Importantly, the molecular composition at synapses can vary widely between synapses, regardless of whether this connection occurs at a shaft or a spine. Therefore, it is difficult to name specific compositional elements always found at a spine versus a shaft. Inhibitory synapses, formed by GABAergic neurons, tend to be found directly on the shaft of dendrites. Glutamatergic neurons, which are excitatory, in the cerebral cortex tend to synapse on dendritic spines, but can also connect directly with the dendrite.  

NM: All dendritic spines have excitatory synapses that require NMDA and AMPA receptors [1]. Are these receptors necessary for these spines to exist?

KZ: To my knowledge, we do not know the answer to this question. It is possible to remove these receptors a few at a time, and spines do not disappear. However, it is really hard to remove a receptor from a single spine and, if the receptors are removed from the entire neuron, it is often replaced with another receptor in a process called compensation. In order to test if this is possible, someone would have to knock out all genes encoding AMPA receptors and NMDA receptors, which is over seven genes, to see if spines still formed. Notably, if AMPA receptors are internalized, the spine typically shrinks, and if more AMPA receptors are brought to the surface, the spine typically grows. Indeed, the number of AMPA receptors at the synapse is directly proportional to the size of the spine. 

NM: What drives spine formation and elimination when creating and refining neural circuits? 

KZ: There really is no definitive answer to this question currently, and many of those performing dendritic spine research are interested in answering these questions. Let’s first look at formation. One theory suggests that there are factors coming from neighboring neurons, such as glutamate or BDNF [2], which promote spine formation. However, it is unclear which of these are acting in vivo, in the animal. Also, spine formation is much greater in younger animals compared to older animals. That can suggest that the cells are in a different state when younger versus older. The cell can be a less plastic state where all the spines are moving slowly, seen in older animals, or more plastic states where all the spines are moving more quickly, seen in younger animals. Thus, there appears to be a combination of intrinsic state, or how plastic the cell is, and extracellular factors such as the presence of glutamate that dictates spine formation. Elimination is similar in that we do not really know the entire molecular signaling sequence that is driving it. It is a fascinating question for so many reasons. For example, when we are young we overproduce spines, and as we grow the spine density declines as our nervous system selectively chooses which connections to keep. Then, as adults, our spine density remains relatively stable. However, we obviously keep learning as adults, even though our spine density remains constant. One hypothesis is that, as a spine grows while learning, a nearby spine with no activity shrinks and eventually becomes eliminated. In fact, we have observed this phenomenon in our studies. Therefore, there may be some local competition between these spines for space. This keeps the density the same across most of the adult life span. 

NM: Does learning drive the formation of synaptic spines or does synaptic spine formation drive learning? 

KZ: This may depend on the type of learning. Both have been observed. Studies have been done imaging the brain during learning. Some people have found an increase in new spine growth suggesting that learning drives new spine formation. Other people say they found the same number of new spine growth, but a greater amount of new spine stabilization, suggesting that learning drives new spine stabilization. 

NM: It has been observed that some intellectual disabilities and neuropsychiatric disorders are associated with an abnormal number of dendritic spines compared to neurotypical individuals. Is this related to insufficient production of dendritic spines at birth or to deficits in pruning?

KZ: Indeed, autism spectrum disorders have been associated with an increase in spines. This could potentially stem from an overproduction of spines or from reduced spine elimination. Notably, the majority of neurological disorders resulting in cognitive deficits, such as Alzheimer’s disease, are associated with decreased spine densities. It is unclear whether the spine numbers or brain function diminishes first, but much of the current research suggests that the spines go away first, leading to the cognitive problems observed. In many disorders with too few spines, spine formation is normal but elimination is excessive; spine density in Alzheimer’s and schizophrenia patients is relatively normal prior to disease onset. For Alzheimer’s specifically, some researchers suggest that release of the pathogenic amyloid beta peptide, which binds to molecules on the surface of the dendritic spine, drives spine loss.

NM: How might dendritic spine research help in treating neuropsychiatric and neurodegenerative disorders?

KZ: Current research is looking at how to stabilize and destabilize dendritic spines. If we were able to manipulate the stability of these spines, we could potentially rescue spine stability in patients with neuropsychiatric disorders, which could lead to better therapies and outcomes. Understanding the pathways that control the stability of these spines will allow researchers to find targets for future therapeutic treatments.

 

Footnotes

  1. Receptors that are permeable to cations. They are usually associated with the depolarization of neurons.
  2. Brain-derived neurotrophic factor (BDNF): Plays a role in the growth and development of neurons.

The Technological Impact on Coffee Growing in the Face of Climate Change

By Anushka Gupta, Genetics & Genomics, ‘20

Author’s Note: Climate change is an important topic and must be discussed in order to mitigate its severe consequences. Unbeknownst to most people, however, coffee is also heavily impacted by climate change due to the sensitive conditions necessary for proper cultivation. I hope I can bring to light some of the less serious impacts of climate change and show how something as ordinary as coffee may disappear without intervention.

 

Over fifty percent of Americans enjoy a daily cup of coffee, with over 500 million cups served every day. Unfortunately, with increasing temperatures due to climate change, coffee is at high risk of extinction. However, with new advances in technology, coffee can now be grown in a wider range of environmental conditions. Specifically, the integration of modern technology into pre-existing growing practices and the use of artificial intelligence have both contributed to making a future with coffee a possibility.

To understand these developing technologies, it is crucial to understand the severity of climate change and how it specifically affects the coffee production business. Given the rapidly increasing rate of global temperatures, coffee will likely be much more expensive and of much lower quality within just 30 years. On top of this, the amount of land available for coffee growing will be cut in half by 2050, according to the Climate Institute, a research organization in Australia. Coffee is grown mostly in tropical regions, like Honduras and Brazil, which also happen to be the regions hardest hit by climate change. In fact, the top countries most affected by climate change are the same countries where the majority of the world’s coffee beans are grown.

This becomes problematic because coffee plants are extremely sensitive to temperatures outside of their ideal growing range. Most coffee will only grow between 18°C and 21°C at high altitudes. The plants also require a precise amount of rain, as anything outside of these optimal conditions will damage or even kill them. Climate change has already had its effects on coffee production around the world. Heavy rains in Colombia, droughts in Indonesia, and coffee leaf rust (a fungus that attacks coffee bean leaves) in Central and South America have significantly decreased coffee yields in the past few years. These are only a few of the many examples of how coffee growers are struggling to maintain their crop yield each year [1].

One way coffee growers are preparing for climate change is by engineering new resistant strains of coffee beans. Currently, one type of coffee bean, Arabica, dominates the entire industry. Arabica is known for its high-quality flavor and aroma but lacks genetic diversity, leaving it susceptible to coffee leaf rust [2]. The coffee leaf rust fungus preys on the leaves of coffee plants, eating away at each leaf until an orange-brown color replaces the previous green, thus destroying the plant’s ability to make its own energy [3]. The lack of genetic diversity allows this fungus to spread rampantly across coffee bean farms: if one strain of the fungus affects a particular variety of Arabica, it is extremely likely that other Arabica plants will also be afflicted [2]. With increasing temperatures already posing a greater threat to the plants, the leaf rust fungus is expected to have an even more apparent impact on coffee yield. In addition, the availability of farmable land is decreasing, as coffee plants only grow in a narrow temperature range, one that is shrinking due to increasing global temperatures. However, creating a hybrid coffee bean can resolve this problem by crossing strains chosen for a desired quality, such as coffee leaf rust resistance [2].

Coffee breeder William Solano works at the Tropical Agricultural Research and Higher Education Center (CATIE) in Costa Rica doing just that. He creates coffee hybrids by combining genetically distant yet complementary coffee strains in hopes of achieving a product that inherits characteristics from each parent strain [3]. At CATIE, he created the Centroamericano coffee bean, a cross between the Ethiopian landrace variety Rume Sudan and another coffee bean called T5296, which is known for its coffee leaf rust resistance [2]. On its own, the Centroamericano has proven to be twenty percent more productive than other coffee beans and is tolerant to coffee leaf rust. However, it soon became clear that it fares better against the effects of climate change as well, as it can survive temperatures below freezing. This was an especially surprising find, as the plant was originally designed with only disease resistance in mind [3].

On top of changing the coffee bean itself, scientists are also finding ways to use new technologies to make coffee farming more efficient. The most promising example so far has been the application of artificial intelligence (AI) to the coffee-growing business. AI technology now allows farmers to accurately analyze soil fertility properties and estimate coffee yields [4]. The American technology company IBM has developed an AI-powered device that does just that. The device, called the AgroPad, is portable and about the size of a business card. It can quickly analyze the soil’s chemical composition, allowing coffee farmers to make educated decisions on how to manage their crops. Coffee growers can improve the sustainability of their crops and save money, since they know the amount of water and fertilizer that would most benefit the crop and maximize yield.

To activate the device, only a small sample is needed. The sample can be either a drop of water or a liquid soil extract produced from a pea-sized clump of dirt, depending on the type of analysis needed. Within about 10 seconds, the device generates a report using an internal microfluidics chip that performs the analysis. The device can give accurate readings of pH and of the amounts of various chemicals, such as nitrite, aluminum, magnesium, and chloride. This information is presented as a set of circles that give colorimetric test results, where the color of each circle represents the amount of a specific chemical in the sample [6]. The figure below shows what a sample report may look like. Once the AgroPad produces this output, the farmer can photograph it with a companion app that reads the data. The implementation of this device could allow coffee to be grown in more parts of the world, as it will make clear what specifically must be done to ensure productive coffee growing in these fields [5].

 

Dedicated mobile app scanning of a sample report created by AgroPad

Fig. 1. Peskett, Matt. “IBM’s Instant AI Soil Analysis – the AgroPad.” Food and Farming Technology, 28 Jan. 2020, www.foodandfarmingtechnology.com/news/soil-management/ibms-instant-ai-soil-analysis-the-agropad.html. 
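To illustrate the kind of colorimetric readout described above, here is a minimal Python sketch of how circle color intensities might be converted into concentration estimates through calibration curves. The chemical names match those mentioned in the article, but every intensity and calibration value is an invented placeholder; IBM’s actual AgroPad pipeline is proprietary and surely differs.

```python
# Hypothetical sketch: converting colorimetric circle readings into
# concentration estimates via calibration curves. All numbers below are
# invented for illustration; the real AgroPad pipeline is proprietary.

# Assumed calibration data: (color intensity, concentration in mg/L)
# pairs for each test circle on the card.
CALIBRATION = {
    "nitrite":   [(0.10, 0.0), (0.45, 1.0), (0.80, 3.0)],
    "aluminum":  [(0.05, 0.0), (0.50, 0.4), (0.90, 1.2)],
    "magnesium": [(0.20, 0.0), (0.55, 25.0), (0.85, 60.0)],
    "chloride":  [(0.15, 0.0), (0.60, 50.0), (0.95, 150.0)],
}

def interpolate(curve, intensity):
    """Linearly interpolate a concentration from a 0-1 intensity reading."""
    points = sorted(curve)
    if intensity <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)
    return points[-1][1]

def read_card(circle_intensities):
    """Map per-circle intensities (as a phone app might extract from a
    photo of the card) to rounded concentration estimates."""
    return {chemical: round(interpolate(CALIBRATION[chemical], value), 2)
            for chemical, value in circle_intensities.items()}

# Example intensities for one soil sample.
sample = {"nitrite": 0.62, "aluminum": 0.30, "magnesium": 0.70, "chloride": 0.40}
print(read_card(sample))
```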

 

Technology has the potential to save some of these coffee plants in the face of climate change. However, at the current rate of warming, it is difficult to say what the world will look like thirty or even fifty years in the future, and whether coffee will be a part of that world. Hopefully, more technological advances will continue to arise over the years, giving hope to coffee growers and coffee drinkers alike around the globe.

 

Sources

  1. Campoy, Ana. “Another Species Threatened by Climate Change: Your Morning Cup of Coffee.” Quartz, Quartz, 3 Sept. 2016, qz.com/773015/climate-change-will-kill-coffee-by-2100/. Accessed 1 Jun. 2020.
  2. Muñoz, Alejandra, and Hernandez. “Coffee Varieties: What Are F1 Hybrids & Why Are They Good News?” Perfect Daily Grind, Perfect Daily Grind, 20 Apr. 2020, perfectdailygrind.com/2017/06/coffee-varieties-what-are-f1-hybrids-why-are-they-good-news/. Accessed 1 Jun. 2020.
  3. Ortiz, Arguedas. “The Accident That Led to the Discovery of Climate-Change-Proof Coffee.” MIT Technology Review, MIT Technology Review, 2 Apr. 2020, www.technologyreview.com/2019/04/24/135937/the-accident-that-led-to-the-discovery-of-climate-change-proof-coffee/. Accessed 1 Jun. 2020.
  4. “The Future of Coffee: 3 Technologies to Be on the Lookout for in 2019.” Royal Cup Coffee, 28 Dec. 2018, www.royalcupcoffee.com/blog/articles/future-coffee-3-technologies-be-lookout-2019. Accessed 1 Jun. 2020.
  5. “Enveritas Pilots IBM’s AI-Powered AgroPad to Help Coffee Farmers.” IBM Research Blog, 10 Dec. 2019, www.ibm.com/blogs/research/2019/12/enveritas-pilots-ibms-ai-powered-agropad-to-help-coffee-farmers/. Accessed 1 Jun. 2020.
  6. “No Farms, No Food.” IBM Research Blog, 7 Mar. 2019, www.ibm.com/blogs/research/2018/09/agropad/. 
  7. “IBM’s Instant AI Soil Analysis – the AgroPad.” Food and Farming Technology, 28 Jan. 2020, www.foodandfarmingtechnology.com/news/soil-management/ibms-instant-ai-soil-analysis-the-agropad.html.

The Neck Raising Behavior of Branta canadensis

By Cristina Angelica Bilbao, Biological Sciences ‘22

Author’s Note: I performed this ethological research study for my Zoology class at Las Positas College. I love animals and was excited to have the opportunity to conduct research on an animal of my choice. I chose to research Canada Geese because I grew up around them and was initially scared of them. I wanted this project to be something that would help me understand the complex behavior of geese and provide knowledge to my community. I chose to go to Shadow Cliffs Lake in Pleasanton California because that is where a large population of Canada Geese live year round. 

Above all, I hope this paper can convey that Canada Geese are not as aggressive as we may think. I hope that this paper can provide the knowledge that geese are actually a lot like humans. With an understanding of their emotional displays, we can begin to understand them in ways similar to how we understand and interact with one another.

 

ABSTRACT

For centuries, biologists have tried to understand the mechanics of animal behavior. In 1963, Niko Tinbergen published four questions that allowed zoologists to focus on animal behavior in a scientifically rigorous manner. Tinbergen’s Four Questions center around four different concepts that could be related to animal behavior: Causation, Development, Adaptation, and Evolution.

In this ethological study, I aimed to explain if the neck raising behavior of Canada Geese was due to causation or adaptation.  An adaptive behavior is a behavior that has been shaped by an environment over a long period of time, while causative behaviors can be linked to physiological responses to stimuli. I hypothesized that the neck raising behavior done by Canada Geese is an adaptation and not a causative behavior in response to a negative stimulus.

In order to understand this behavior, I observed a population of Canada Geese residing at Shadow Cliffs Lake in Pleasanton, California. Previous studies proposed that Canada Geese raise their necks due to an adaptation and not solely as a reaction to a negative stimulus. This study was done over a period of ten days, with a close study and a distant observation study. When the distance was minimal, the geese were indifferent and only raised their necks out of curiosity. When the distance was greater, with no external stimulus, the geese still raised their necks at the same frequency. The results of this experiment confirmed that the neck raising behavior of Canada Geese is an adaptation rather than just a reaction to a stimulus. My observations also supported the null hypothesis of no difference between conditions, because the geese raised their necks at the same frequency during the close and distant studies.

 

INTRODUCTION

In the book Geese, Swans and Ducks [1], Canada Geese are described as sociable and family-oriented birds that show their emotions through a variety of displays.

Emotional responses can be the key to understanding Canada Geese behavior. It has been observed that dominance and aggression tend to be expressed simultaneously among Canada Geese. Bernd Heinrich, a professor of biology at the University of Vermont, conducted an observational study on a breeding pair of Canada Geese. He made many personal accounts of how the breeding pair showed high-intensity aggression towards him while guarding their eggs. When Heinrich came back once their young had hatched, the pair was not aggressive and seemed to adjust to the researcher’s presence [8].

The prominent neck of the Canada Goose is an important indicator of emotional responses. Emotional displays characterized by head pumping, withdrawn necks, and vocalizations have been designated as situational and related to attacking or fleeing action [3]. The threat postures were categorized as a variety of neck movements accompanied by vocalized hissing noises [4]. Another study observed specific members of Canada Geese families taking up ‘guard positions’, which involved various sets of alert and alarm postures [3]. The alert and alarm postures were characterized by behaviors similar to the threat postures. If the geese were in an alert posture, they would raise their necks high and freeze. If they were in an alarm posture, the geese would likely begin to display threat behaviors such as wing flapping and vocalization to warn the rest of the group. This behavior was seen equally between male and female geese, especially when they were protecting their nests from predators [3].

While it seemed that aggression was the only explanation for this reaction, I began to question whether that was the sole conclusion that could be made. The Shadow Cliffs Lake population could usually be seen on the beach, in the lake, or on the grass with minimal worry about threats. I observed that the geese coexisted with the other waterfowl in the area, and they seemed to have grown comfortable around humans due to constant contact. This could be one of the reasons that the group chose to stay in this area for prolonged periods of time.

Before I performed my observational study, I conducted two preliminary observations over two days and noticed an unusual behavior. At specific times of the day when the geese fed, members of the group would raise their necks at random. The neck extension would occur for about a minute before they went back to eating again. I believed that this may have been a form of communication, until I noticed another behavior that proved my initial hypothesis incorrect. While feeding, the geese would form a tactical perimeter around those who were eating in the center. The geese acting as guards on the outside would raise their necks for the longest periods of time and remain alert.

As I performed my preliminary research, I aimed to learn more about the population of Canada Geese residing at the lake. I interviewed Mark Berser, one of the Shadow Cliffs Park Rangers, who has been working at the park for over 10 years. He confirmed the geese were usually around the lake throughout the year. He stated that the tall grass by the water’s edge, close to the picnic benches, was a place the geese went to both eat and sleep. He had not observed the frequency of the geese’s neck raising behavior, but he was aware that they all seemed watchful in their close groups regardless of any apparent threat.

I expanded my research to online scholarly journals, where I found more information regarding threat postures than the specific behavior I aimed to observe. These studies provided a generalized explanation of threat postures, which included neck raising [4]. I was able to find foraging studies that mentioned neck raising behavior as well, but they did not provide details about why the behavior occurred [4]. I deduced that most researchers thought of this behavior as a biological reaction to a stimulus, also known as the attack-flee response [4]. The attack-flee response is a neurological response to a threat that prepares an animal to decide whether to fight or flee. I aim to challenge this idea by hypothesizing that the neck raising behavior done by Canada Geese is an adaptation and not a causative behavior in response to a negative stimulus.

 

MATERIALS AND METHODS 

Before I began my study, I gathered binoculars for observations and a field notebook to keep records. Next, I developed a detailed schedule for how and when I would observe the geese. I planned to conduct a ten-day study of the population of Canada Geese in their habitat at Shadow Cliffs Lake. My observation days were on Mondays and Fridays before noon, in order to reduce the possibility of outside interference. If a situation prevented me from attending on a Monday or Friday, I made observations on another day at the same time in order to prevent discrepancies in the results.

During the first five days of the experiment, I positioned myself ten feet away from the geese. I casually sat at one of the tables in order not to disturb their normal routines. I then focused my observations on the frequency of the neck raising behavior over the course of one hour. Throughout these five days, I made sure to note the frequency of the behavior in my field notebook. Once the first five observation days were complete, the second portion of the experiment began. I made sure to arrive unnoticed as I positioned myself at a greater distance from the population. By keeping my distance, I eliminated the possibility of the geese raising their necks as a reaction to my presence. During this time, I watched the geese with binoculars and took note of their neck raising behavior for one hour. I continued to take notes on the frequency of the behavior, and I began to transfer the data to a table in order to properly visualize the results.
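As a companion to this protocol, the following minimal Python sketch shows how per-session tallies from the two conditions could be summarized and compared. The counts here are hypothetical placeholders, not the actual field data recorded in the notebook.

```python
# Hypothetical sketch of the comparison implied by the protocol above:
# neck raises counted per one-hour session in the close condition
# (observer ~10 feet away) versus the distant condition (binoculars).
# The counts are invented placeholders, not the actual field data.
from statistics import mean, stdev

close_counts = [14, 12, 15, 13, 14]    # five close-study sessions
distant_counts = [13, 14, 12, 15, 13]  # five distant-study sessions

def summarize(label, counts):
    print(f"{label}: mean {mean(counts):.1f} raises/hour "
          f"(sd {stdev(counts):.1f}, n={len(counts)})")

summarize("Close study", close_counts)
summarize("Distant study", distant_counts)

# A near-zero difference in means across conditions is what the study
# reads as support for the null hypothesis of no observer-driven effect.
print(f"Difference in means: "
      f"{mean(close_counts) - mean(distant_counts):+.1f} raises/hour")
```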

 

RESULTS

The target population of Canada Geese typically remained in a large group. On random occasions, smaller groups would split off from the rest of the family to either get an early start on bathing in the lake or to find food elsewhere. While they did split, they would eventually come back together in one location.

In both large and small groups, there was a noticeable presence of ‘guard’ geese. These geese would take positions around the perimeter of the large or smaller groups. When the guard geese noticed me, they raised their necks more frequently and held the position for the longest periods of time. The geese observed me before leaning down to eat and switching roles with another member.

During the first five days, the geese were observed in close proximity. The geese were aware of my presence, but they did not spread their wings or pump their necks in a threatened manner. Instead, they appeared indifferent towards me, raising their necks at a very low frequency as they waited to see if I had food. However, the geese would raise their necks more frequently if they felt that I was too close.

The second half of the observational study was performed at a further distance. The geese were unbothered as they continued with their usual daily routine. The guards and the others in the population raised their necks at low frequency, similar to what was observed in the first part of the study. 

 

CONCLUSION AND DISCUSSION

In this ethological study, I aimed to observe the neck raising behavior of Canada Geese over the course of 10 days. During this observation period, I intended to explain whether the behavior was due to causation or adaptation, as it relates to Tinbergen’s Four Questions. Causation would describe the behavior as a physiological response to stimuli, while adaptation would describe it as a behavior shaped by the environment over time. Through this study, I was able to compare whether this behavior was more physiological or adaptive by taking factors such as visual stimuli and the general environment into account. Initially, I hypothesized that the neck raising behavior done by Canada Geese is an adaptation and not solely a physiological response to a negative stimulus.

To successfully observe this behavior in the chosen target population, a variety of variables were taken into account. The Canada Geese were to be observed without external stimuli, which meant that time, presence of food, and distance were all important variables. There was no food present, and the population was observed at a time with minimal exposure to external stimuli. The geese were observed from both far and close proximity in order to test whether the neck raising behavior was an adaptation.

To test my hypothesis, observational data was collected over the course of ten days. Chart 1 presents the behavioral observations from a short distance. The geese were aware of my presence, raising their necks to observe me, but they seemed indifferent towards me. More specifically, the perimeter guard geese seemed to raise their necks the most and for the longest periods of time. When the designated guards vocalized, the group of geese raised their necks collectively. I kept note of these results as I moved into the next phase of the study. Chart 2 depicts the behavioral observations made from a distance. Similarly, the geese raised their necks at the same frequency to look at their surroundings as they went about their daily routine.

The similarity in the results shows that the neck raising behavior is not linked solely to the presence of a stimulus. In relation to the work done by Blurton-Jones, neck raising and an erect posture could be perceived as a threat posture to drive away predators [2]. At a closer distance, the geese would raise their necks. While this could be read as a threatened reaction, the behavior seemed to reflect awareness of my presence rather than aggression. This was confirmed when the geese did not flex their wings or alert the group of a threat. The geese raised their necks and gave no further reaction unless I got significantly closer. This allowed me to conclude that the neck raising behavior is not linked solely to threatening external stimuli.

With these results in mind, my hypothesis was supported. This experiment suggests that the neck raising behavior is an adaptation developed over time as a method of protection rather than just a reaction to a stimulus. Through the act of raising their necks, the Canada Geese make it known that they are aware of threats and are watching out for members of their family.

In the future, I hope to perform more in-depth studies. Within this study, I was limited by the population of Canada Geese available to me. This group was the only significantly large population in Pleasanton, which ultimately limited my results. There was also a limiting factor relating to human contact: the group of geese that I observed had grown used to human contact rather than perceiving humans as threats. While this group displayed the behavior I aimed to observe, the results could be different in situations with limited human contact.

In future research, populations of Canada Geese that have not experienced human influence should be observed. This would help determine whether the environment is a factor contributing to the neck raising behavior. Further research should also include sexing the geese in order to rule out sex as a possible factor contributing to the frequency of the neck raising behavior.

 

References

  1. Kear J. 2005. Canada goose (Branta canadensis). Ducks, geese and swans: general chapters, species accounts (Anhima to Salvadorina). New York (NY): Oxford University Press. p. 306-316.
  2. Blurton-Jones NG. 1960. Experiments on the causation of the threat postures of Canada geese. Wildfowl. 11(11): 46-52.
  3. Klopman RB. 1968. The agonistic behavior of the Canada goose (Branta canadensis canadensis): I. attack behavior. Behaviour. 30(4): 287-319.
  4. Raveling DG. 1970. Dominance relationships and agonistic behavior of Canada geese in winter. Behaviour. 37(3/4): 291-319.
  5. Hanson HC. 1953. Inter-family dominance in Canada geese. The Auk. 70(1): 11-16.
  6. Herrmann D. 2016. Canada geese. Avian cognition: exploring the intelligence, behavior, and individuality of birds. Boca Raton (FL): CRC Press. p. 72-143.
  7. Akesson TR, Raveling DG. 1982. Behaviors associated with seasonal reproduction and long-term monogamy in Canada geese. The Condor. 84(2): 188-196.
  8. Heinrich B. 2010. Parenting in pairs. The nesting season: cuckoos, cuckolds, and the invention of monogamy. Cambridge (MA): Harvard University Press. p. 210-213.

 

Appendix

Chart 1: Field Notes Observation Table Results for Close Study 

Date Day/Description Summary of Behavior 
9/30/19 Day 1- 1:30-2:30 p.m. 

The weather at the time was comfortably warm. The sun was out and there was a light breeze. While it did seem like the perfect day to go out to the lake, there actually weren’t a lot of people in the area. This made it easier to reduce any interference. 

In the large family of geese and even in the smaller sub groups, there were designated ‘guards’. These rotating guards seemed to have a strategy of holding the perimeter. They were the members of the group that actually raised their necks the longest and most frequently. They would hold their necks up for about 10-20 seconds.

-When they noticed me, or if I got too close, the guard geese signaled a call to the rest of the group. The call would get the rest of the group to raise their necks. Overall, though, they didn’t seem threatened, but instead seemed somewhere between indifferent and curious about whether I had food.

10/4/19 Day 2- 11:30-12:32 p.m. 

The weather at the time was a little colder than on the first observational day. The sun was out, but there was a slight breeze. There were not many people at the lake.

The entire family of geese would raise their necks at random points. It became clear that the ‘guards’ were the ones in the best position to alert the group of any potential problems.

-The guards raised their necks the most frequently. 

-In an area with no apparent threats, the geese still seemed to be very clearly aware of their surroundings.

10/8/19 Day 3-12:30-1:30 p.m.

The weather today clearly signaled that fall was near. The sun was out, but it was cloudy and there was a strong wind, which ultimately made it a cold day. There were people at the lake, but not many near the geese.

Similarly to the other two observation days, the geese were still mostly indifferent to my presence. 

-If I got too close, the guard geese would make sure the others knew about me, but they were far from being threatened.

-The neck raising behavior occurred frequently when I was in the presence of the population. 

10/11/19 Day 4-11:27-12:30 p.m.

The weather today was cold. I could still see the sun but there were more clouds than most of the other days. There also was a slight breeze. Once again there were not many people at the park. 

The geese retained the same indifferent behavior towards me. 

-Most of the time, they seemed to raise their necks simply because they wanted to be aware of me, perhaps hoping that I had food.

-At the same time, the guards were still the ones that raised their necks most frequently.

10/16/19 Day 5-6:40-7:40 p.m. 

Because it was the evening, the sun was setting. The weather grew windy and cold as darkness approached. There were still a few people at the lake, but they were not interfering with the geese.

The geese were raising their necks every few minutes like usual and holding the position before they went back to eating.

-It was interesting to see that there were actually more guards around vocalizing and raising their necks quite frequently. 

-The guards were also still raising their necks quite frequently when it came to watching out for the small groups flying to the water to sleep. 

 

Chart 2: Field Notes Observation Table Results for Distance Study

Date Day and Description Summary of Behavior
10/19/19 Day 1- 1:20-2:20 p.m. 

The weather at the time was sunny and warm. There weren’t too many people at the lake at this time, which minimized experimental interference. 

The geese still seemed to display the same alert behavior through the action of neck raising. 

-The perimeter guard geese still seemed to be the particular members of the family that raised their necks the most often and longest. The longest neck raised posture that the geese held was about 10 seconds.  

-The neck raising behavior would occur at random points of time to be alert of the surroundings and what the other members of the population were doing. 

10/22/19 Day 2-  2:13-3:30 p.m. 

The weather was sunny and warm, but there was a stronger breeze than on the previous observation day. There were a few more people at the lake today, but they weren’t anywhere near the goose population.

The guard geese were still on alert both on the beach and in the park area. 

-It is important to note that while the guards were still the ones raising their necks the most and the longest, the rest of the family also displayed the behavior.

-When the guards seemed to vocalize, they had the power to get the whole group to stop eating and raise their necks to attention. The behavior also seemed to occur if two groups were calling to each other from two different locations. 

10/25/19 Day 3- 1:13-2:13 p.m.

The sun was out but it was a bit colder, hinting at the arrival of the fall season. There were people fishing but since the geese took particular interest in feeding within an enclosed area, observation was not affected. 

The geese in the parking lot and picnic area actually seemed to have similar frequencies of neck raising behavior. 

-In the picnic area, the geese seemed extremely indifferent to the people in the distance and actually laid down. It was interesting to see the guards lie down as well, yet still raise their necks at the same frequency.

-Regarding the group in the parking lot, the neck raising behavior still occurred at the same frequency. The difference was more observable in how long the posture was held, which came out to be about 30 seconds.

11/8/19 Day 4- 11:30 a.m. -12:30 p.m. 

There were a few more clouds and a stronger breeze, but I could still clearly see the sun. The observations were carried out with no interference.

The guard geese prominently were still the usual members of the population raising their necks the longest and most frequently compared to the rest of the population. 

-The geese that took up the roles as the guards happened to be the most alert and aware of the surroundings and other members of the family. 

-The neck raising behavior seemed to serve as both a reaction to family calls and a general adaptation to watch the surroundings for any threats to the population.

11/15/19 Day 5- 12:40-1:40 p.m. 

The sun was out but the overall weather was cold, mainly due to the strong breeze. Once again there weren’t many people around, which allowed for the geese to not get distracted. 

On the final day of observation, the geese were in a much larger group feeding in the park area. There was also a small group on the beach.

-It was clear that there were designated guards, which told me that these roles were not just temporary. 

-The whole family would raise their necks, but it was the guards that raised their necks most frequently. 

-It seemed that the neck raising behavior was occurring quite randomly. Realistically, this could mean that the geese raising their necks is an adaptation to stay aware of what the family is doing. It could also be an adaptation to avoid predators.

 

Field Note Photos and Supplementary Images:

  • The first set of images are entries from my field journal. Though they may be difficult to see, the data has been summarized in the tables above.
  • The second set of images depicts two instances where the geese were displaying neck raising action. They also show the guard geese in place.

 

Image 1: This image was taken at the time the close distance observations were being conducted. In this image, a group of geese can be seen eating on the grass of the park hill.  On the far corners of the image, two guard geese can be seen taking up perimeter positions around the rest of the group.

 

Image 2: This image was taken at the time close observations were being conducted for the experiment. In this image, two Canada Geese, possibly a mating pair, were foraging in the grass. The current acting guard can be seen on the left.

Will This Pandemic Unite Us Against Climate Change?

By Pilar Ceniceroz, Environmental Science and Management ‘21

Author’s Note: I originally wrote this piece for a UWP104E assignment. However, the topic remains relevant to people all around the world. In the past, it has been hard to visualize our individual impacts on the environment. COVID-19 has become a great example of how behavioral changes can drastically transform our surroundings. I would like my readers to understand the power of unity in the face of what might be the next global crisis, climate change. 

 

Introduction 

After the World Health Organization (WHO) declared a global health emergency on January 30th, 2020, the world has seen extreme changes as the daily lifestyles of almost everyone have been rapidly altered [1]. The ongoing effort to slow the spread of the virus while sheltering in place has not been without sacrifice: countless people lost their jobs, most cannot physically go to school, and everyday activities have been significantly modified. However, this halt of “business as usual” has had fascinating impacts on the environment. Stay-at-home orders shut down production in industrial facilities and power plants and minimized personal vehicle use [2]. With a major decrease in economic activity, highly polluted cities around the world are now seeing clearer skies. The seemingly dull, repetitive routine of quarantine life has allowed the environment to flourish.

While the consequences of COVID-19 include global economic devastation, the environment has seen both positive and negative indirect impacts as a result of stay-at-home orders and the declining economy. Regions with COVID-19 restrictions experienced a decrease in air and water pollution. These restrictions included a stop to nonessential work and travel as well as the closing of restaurants and bars. Concurrently, the use of single-use products has significantly increased to limit the spread of the virus. Decreasing air pollution is a major milestone for our modern world; however, as the world returns to normal life, pollution levels will follow. Although a short-term decrease in greenhouse gas (GHG) emissions is not a sustainable way to support the environment, communities around the globe have witnessed the instantaneous impacts of our everyday habits on the environment due to COVID-19.

 

Drop in Atmospheric and Water Pollution 

Today, 91% of the world population lives in places where air pollution exceeds the permissible limits set by the WHO [2]. Air quality is an important contributor to human health, and living in an area with poor air quality can exacerbate the symptoms of COVID-19. According to the 2016 WHO report, air pollution contributes to 8% of total deaths in the world [2]. Countries that normally struggle with unhealthy air, such as China, the USA, Italy, and Spain, have since seen clearer skies for the first time in decades after taking aggressive measures to slow the spread of the virus. There has been a dramatic decrease in the amount of CO2, NO2, and particulate matter emitted in China following the halt of industrial operations and the decrease in demand for coal and crude oil (see fig. 1) [3].

 

Changes in nitrogen dioxide emission levels in China before and after lockdown.

Fig. 1. Zambrano-Monserrate, Manuel A., et al. “Indirect Effects of COVID-19 on the Environment.” Science of The Total Environment, vol. 728, 20 Apr. 2020, p. 138813., doi:10.1016/j.scitotenv.2020.138813.

 

Compared to this same time last year, air pollution levels have dropped 50% in New York [1]. There has been a 25% decrease in air pollution since the start of this year in China, one of the largest manufacturing countries [1]. The closing of factories contributed to a 40% reduction in coal usage at one of China’s largest power plants [1]. The average coal consumption of power plants has reached its lowest point in the past four years [3]. Clearly, the outbreak has improved short-term air quality and has contributed to reducing global carbon emissions. Fewer flights and social distancing guidelines have reduced carbon emissions as well as other forms of pollution. Tourism significantly decreased worldwide, and beaches around the world have been cleaned up. For example, citizens of Venice, Italy were amazed to see crystalline waters and healthy fish in their canals [1].

 

Comparison of air quality in some of the biggest cities around the world before the COVID-19 pandemic and during the lockdown.

Fig. 2. Saadat, Saeida, et al. “Environmental Perspective of COVID-19.” Science of The Total Environment, vol. 728, 22 April 2020, p. 138870., doi:10.1016/j.scitotenv.2020.138870.

 

Increased Single Use Plastics 

In order to completely analyze the impact of COVID-19 on environmental health, the negative impacts of the virus on the environment are just as important as the positive effects. Although travel restrictions have led to less pollution caused by tourism, single-use plastics and medical equipment have significantly increased waste around the world. In the USA, there has been a significant increase in the amount of single-use personal protective equipment, such as masks and gloves [1].

Imagine the amount of trash created when millions of people use one or more masks daily, along with single-use gloves and hand sanitizer containers. With a population of eleven million people, the city of Wuhan produced an average of 200 tons of clinical trash per day in February 2020, compared to its previous average of fifty tons per day [1]. This is four times the amount the city’s only dedicated facility can incinerate per day [1].

The demand for plastics has also increased as consumers move to online purchasing. Shelter-in-place guidelines established in most countries have driven consumers to increase their demand for online orders and home delivery [2]. The increasing demand for shipping and packaging greatly increases the amount of waste produced, as well as GHG emissions, with increased activity in supply lines. Out of concern about spreading the virus through plastic surfaces in recycling centers, some cities in the U.S. stopped their recycling programs [2]. Additionally, in some of these cities, citizens are not allowed to use reusable bags at grocery stores. Similarly, some European cities have seen restrictions within waste management. Italy has prohibited infected residents from sorting their personal household waste [2]. Disposable bag bans have been repealed, many industries have switched to single-use packaging, and online food ordering has increased in popularity [2]. The consumption of single-use plastics has skyrocketed to limit transmission [1, 3]. This suspension of sustainable waste management practices could escalate environmental pollution.

 

Medical waste generated in the environment during the COVID-19 pandemic.

Fig. 3. Saadat, Saeida, et al. “Environmental Perspective of COVID-19.” Science of The Total Environment, vol. 728, 22 April 2020, p. 138870., doi:10.1016/j.scitotenv.2020.138870.

 

Where Do We Go From Here? 

Over the last few months, people were enamored by how modern-day pollution vanished before their eyes. Strict stay-at-home orders decreased the amount of air and water pollution in otherwise unhealthy cities. Contrarily, the considerable increase in single-use plastics may have a lasting negative impact on the environment. Although these outcomes may be hard to compare in magnitude, they help put the larger picture into perspective. Short-term change is not a sustainable way to clean up the environment, especially when it occurs alongside economic devastation. Before the pandemic, individual action against climate change felt like an abstract idea, out of reach due to its lack of immediacy. Now, the world has seen changes to our environment from worldwide behavior. Visible skies and vibrant waterways are distinguishable changes that are legitimate grounds to build momentum and take action for a healthier future. Although the pandemic may not have a drastic impact on the future of the environment itself due to its conflicting effects, it can instigate discussion on improving the personal actions that impact the environment. Long-term structural change and individual behavior changes are critical in combating environmental pollution. Moving forward, it is imperative that the unification of collective conscious behavior be a driving force to combat climate change. If neglected, climate change is likely to take many lives in the future, making this pandemic look like a minor devastation by comparison. Let the urgency of our united global response to COVID-19 influence our future response to the next global crisis: climate change.

 

References

[1] Saadat, Saeida, et al. “Environmental Perspective of COVID-19.” Science of The Total Environment, vol. 728, 22 April 2020, p. 138870., doi:10.1016/j.scitotenv.2020.138870.

[2] Zambrano-Monserrate, Manuel A., et al. “Indirect Effects of COVID-19 on the Environment.” Science of The Total Environment, vol. 728, 20 Apr. 2020, p. 138813., doi:10.1016/j.scitotenv.2020.138813.

[3] Wang, Qiang, and Min Su. “A Preliminary Assessment of the Impact of COVID-19 on Environment – A Case Study of China.” Science of The Total Environment, vol. 728, 22 Apr. 2020, p. 138915., doi:10.1016/j.scitotenv.2020.138915.

The Parable of the Passenger Pigeon: How Colonizers’ Words Killed the World’s Largest Bird Population

By Jenna Turpin, Wildlife, Fish, and Conservation Biology ‘22

Author’s Note: I started this piece as an assignment for my undergraduate expository writing class under the guidance of my supportive professor Hillary Cheramie. Hillary urged me to take my writing beyond her course. In May, I had the wonderful opportunity to share this research at the 2020 UC Davis Annual Undergraduate Research Conference. I want to continue to share my work through publication. I wrote this piece with the intention of inspiring both students and teachers. From this paper, students can learn the parable of the passenger pigeon and teachers can come to understand why teaching about the passenger pigeon matters.

I learned of the passenger pigeon during my first week of college at UC Davis. One of my professors, Dr. Kelt, explained a brief history of the passenger pigeon to my first-year wildlife ecology and conservation class. The lesson was about wildlife-human interactions and the destruction humans can execute on the environment. The passenger pigeon’s story shook me to my core. It was a disturbing portrayal of how people sometimes negatively shape ecosystems. For me, it reinforced all of the reasons I decided to study wildlife conservation. I want people who read this piece to feel the emotions I felt when I first took in the parable of the passenger pigeon and come to the belief that humans have a responsibility to conserve species through management, policy, and education. The more people who hear this parable, the more people who hold sympathy for our wildlife. It should be built into schools’ science and history curriculums. A greater understanding of the passenger pigeon will save future species from extinction.

 

Abstract

Genre is the literary process through which people collectively communicate about a topic. Applied to a species, genre helps us understand how society communicates about that animal. Species’ genres change over time as different people interact with them. This influences human-wildlife interactions and thus plays a critical role in determining the fate of a species. In its prime, the passenger pigeon (Ectopistes migratorius) was the most abundant bird species in existence, yet it went extinct. The dynamics of human-wildlife interactions over time defined the progression of the passenger pigeon’s recorded history. These interactions varied based on how the dominant people in North America thought about the bird and the genre surrounding its existence. The parable of the passenger pigeon is a poignant example of why genre matters in preserving species and how this can go wrong. Analysis of the historical evolution of the passenger pigeon’s genre shows that the European colonization of North America is why these birds went extinct. I conducted a survey showing that the passenger pigeon’s genre is fading among young people. Failing to spread the parable of the passenger pigeon is a threat to every currently endangered species and their respective genres.

 

Introduction

The passenger pigeon (Ectopistes migratorius) lived in North America and was described as having a “small head and neck, long tail, and beautiful plumage” [1]. In its prime, it had the largest population size of any bird species, but it went extinct due to overexploitation and habitat loss caused by European settlers [1]. The dynamics of human-wildlife interactions over time defined the progression of the passenger pigeon’s recorded history.

These interactions varied based on how the dominant people in North America thought about the bird and the genre surrounding its existence. Genre refers to “repeating rhetorical situations” that aid human interaction. In other words, it is the collection of ways people refer to a specific topic. The definition of genre can be applied broadly. Genres are dynamic and develop over time as people face new situations to apply them to. Every species has its own genre surrounding its existence. People participate in many genres on a daily basis, even if they do not know it. Genre is a “social action”: people shape genres and genres shape people [2]. The way groups of people collectively feel about anything is communicated through language. Thus, looking at the way people talked about passenger pigeons explains the processes that led to their downfall. The passenger pigeon is an effective ambassador for teaching youth about conservation because of the population’s rapid decline.

 

Historical Evolution

Indigenous People 

The passenger pigeon’s parable begins with the Indigenous people who lived within the range of the bird, which mostly covered the eastern half of America [1]. These Indigenous people were the first humans to interact with the passenger pigeon and create its genre. Simon Pokagon, a Potawatomi tribe member, was interviewed about seeing them in flight: “When a young man I have stood for hours admiring the movements of these birds. I have seen them fly in unbroken lines from the horizon…” [3]. Under Indigenous peoples’ care, passenger pigeon numbers rose beginning in the years 100 to 900 C.E. [4]. This was because Indigenous people and passenger pigeons had a well-balanced relationship that allowed both populations to thrive.

Indigenous people carefully interacted with the passenger pigeon because it was an important game bird to them, second only to the wild turkey [1]. The passenger pigeon was a staple food of the Seneca, who named the bird jah’gowa, meaning “big bread” [4]. Tribes followed specific procedures for hunting the birds. Almost all tribes had a strict policy, based in both religion and biology, against taking nesting adult passenger pigeons. This strategic wildlife management policy promoted chick survival by allowing parents to care for their young. The Sioux and the Iroquois League were among those known to enforce their rules on other hunters. Instead of hunting the birds during this time, they often used nests as an opportunity to closely observe the bird. Individual tribes also had additional policies. For the Ho Chunks, hunting of the passenger pigeon could only happen if the chief held a feast. When the birds returned in spring, they offered much-needed seasonal food. Before the Seneca began the hunt, they monitored the nests until the chicks were two to three weeks old. The Seneca even went as far as managing the habitat of the passenger pigeon; for instance, they did not allow the cutting of any tree a “chief” pigeon nested in [4]. Chief Pokagon of the Potawatomi tribe credits strategies such as these for not only allowing the pigeon to maintain its numbers but actually increasing them [1]. By thinking about the needs of the pigeons and adjusting behaviors to accommodate those needs rather than freely hunting them, the population was able to continue on as a reliable food resource for the tribes that used it.

Furthermore, the connection between the two species went beyond the typical predator-prey relationship. To many Indigenous people, the pigeons were not just food; they were a being. Passenger pigeons were included in the religion of some tribes through stories, song, and dance [1]. The Seneca believed that the pigeon gave its body to create their children. The passenger pigeon was so important to the Seneca that they termed albino ones “chief of all pigeons” and strictly forbade hunting them. The Cherokee and the Neutrals told similar stories of the bird as a guide to avoid starvation. The Seneca and the Iroquois opened their Maple Festival every year with a dance song about the bird. The Cherokee Green Corn Festival featured a dance mimicking a pigeon hawk in pursuit of a pigeon [4]. The pigeons held value in the lives of the people who benefited from them.

 

European Arrival

The Europeans recorded their first passenger pigeon on July 1, 1534 [1]. Right away, colonizers of every walk of life made note of the massive number of pigeons Indigenous people had maintained. The average European enjoyed the sight: “…I was perfectly amazed to behold the air filled and the sun obscured by millions of pigeons…” [1]. Many accounts told the narrative of an undiminishable population. Schorger, a professional ornithologist, confirmed this notion, stating that “no other species of bird, to the best of our knowledge, ever approached the passenger pigeon in numbers” [1]. More ornithologists like Alexander Wilson took records: “In the autumn of 1813…I observed the pigeons flying from northeast to southwest, in greater numbers than I thought I had ever seen them before…The light of the noon day was obscured as by an eclipse” [5]. Even Leopold described them as a “biological storm” that used the resources of the land to their advantage [6].

While everyone knew the birds to be copious, not everyone understood the science behind it. Ornithologists knew that the pigeons could thrive because they had ample food and habitat when the Europeans arrived. However, for the vast majority of Europeans, who were not trained in biology, a flock of birds blocking out the sky was frightening and unexplainable. This is where the genre began to separate itself from Indigenous peoples’ understanding of the bird. Europeans constructed urban legends in an effort to explain what was unknown to them. When only one acorn was found in a pigeon’s crop (food storage pouch), Europeans predicted death and sickness. The evidence they saw supported their beliefs: “It is a common observation in some parts of this state, that when the Pigeons continue with us all the winter, we shall have a sickly summer and autumn” [1].

 

Diminishing

As colonizers made themselves more at home in North America, they encroached on passenger pigeon habitat and depleted the birds’ numbers. The colonizers did not take wildlife management into consideration while hunting. Instead, they killed far more than they took and failed to leave young and nesting birds alone [1]. Extinction seemed entirely impossible; they did not see a need to ensure the next generation of pigeons could continue, as it was understood as a given. To compound this, the birds were generally not thought of highly in European cultures. The passenger pigeon was merely a thing to exploit rather than a being to feel for. The birds began to disappear from the places humans occupied, retreating into what wilderness remained [3].

People began to notice that the passenger pigeon populations were fading. Some states, like Ohio, actively avoided policies to protect the species, claiming “the passenger pigeon needs no protection. Wonderfully prolific, having the vast forests of the North as its breeding grounds, travelling hundreds of miles in search of food, it is here to-day, and elsewhere to-morrow, and no ordinary destruction can lessen them or be missed from the myriad that are yearly produced” [1]. Other states, particularly Wisconsin, wrote laws to protect the species: “It shall be unlawful for any person or persons to use any gun or guns or firearms, or in any manner to maim, kill, destroy, or disturb any wild pigeon or pigeons at or within three miles of the place or places where they are gathered for the purpose of brooding their young, known as pigeon nestings”. Laws along these lines were enacted in several states, but no efforts were made to actually enforce them. Much of this was due to pushback from settlers against the laws. Farmers in particular protested any enforcement, worrying that allowing the pigeons to thrive would mean crop destruction [1]. To them, the pigeons were pests to get rid of, not preserve.

 

Gone

The last passenger pigeon, named Martha, was a resident of the Cincinnati Zoo up until her passing on September 1, 1914. She was named after First Lady Martha Washington and was housed with a companion named George. The pair never produced fertile eggs; the zoo’s captive breeding effort was too late to save the population [1]. The end of the species “removed more individual birds than did all the other 129 [previously recorded bird] extinctions put together”. The pigeons went extinct because of introduced species, chains of extinction, overexploitation, and habitat loss, all four of which were human-driven factors. Captive breeding, regulated hunting, and habitat protection could have saved them. However, these efforts were seldom made and were not made early enough in the population’s decline [3]. The passenger pigeon was lost because of the genre Europeans created for it during the time it was still around.

The majority of non-Indigenous Americans only came to appreciate the passenger pigeon, and shifted their genre, once the birds were no longer around. People now found a soft spot for the birds in their memories: “Alas, the pigeons and the frosty morning hunts and the delectable pigeon-pie are gone, no more return”. Artists incorporated these fond memories into their paintings, poems, and music. Monuments were erected around the United States inscribed with laments such as “this species became extinct through the avarice and thoughtlessness of Man” and “the conservationist’s voice was heard too late” [3]. People regretted the fact that future generations would not get to see the bird in the sky, so they attempted to etch the passenger pigeon into everyone’s minds [6].

Making an effort to remember the passenger pigeon is important because the species’ story functions as a lesson and a guide for the future. However, in the past decade, the passenger pigeon has been fading from memory. Many high school students are not taught about the population’s time on Earth and why the birds are now gone, as shown by a case study in Pennsylvania [7]. If people are no longer talking about it, then the same mistake will be made again. At the same time, there are also organizations, like Project Passenger Pigeon, founded in 2014, working to tell the parable “through a documentary film, a new book, their website, social media, curricula, and a wide range of exhibits and programming for people of all ages” [8]. However, a small group of thoughtful individuals will not be enough to save the next species from human destruction if the story of the passenger pigeon does not make it into enough of the right hands.

 

Experiment

Over one month in March 2019, I surveyed teenagers about their knowledge of the passenger pigeon. At the time of the survey, the respondents were high school or college students in the United States. The overall purpose of the survey was to investigate whether young people are talking about the passenger pigeon in contemporary society. Of the fifty-one responses, three (6%) spoke about the passenger pigeon accurately. Eight (16%) believed they knew the true story of the passenger pigeon, yet all eight falsely stated that the passenger pigeon was used to carry messages. Eight (16%) even claimed to have seen a live passenger pigeon since 2000.
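The reported percentages follow directly from the counts; a quick Python check (using only the counts above) reproduces the rounding:

```python
# Counts reported in the survey above (51 total responses).
total = 51
counts = {
    "accurate account": 3,
    "messenger-pigeon myth": 8,
    "claimed live sighting since 2000": 8,
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
# accurate account: 3/51 = 6%
# messenger-pigeon myth: 8/51 = 16%
# claimed live sighting since 2000: 8/51 = 16%
```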

My survey found that very few teenagers have heard the parable of the passenger pigeon’s extinction. This group has gone through a large amount of schooling so far without ever being taught about the passenger pigeon, despite its intertwining with significant historical events. Conveniently, passenger pigeons feel familiar to people, since most have seen today’s common pigeon, the rock dove; it is easy for the uninitiated to imagine what a passenger pigeon was like based on what they know about rock doves. The parable of the passenger pigeon can be taught in any classroom: science, history, art, and more.

The experiment shows that the genre is not being passed on. This is exactly why the way contemporary society talks about this species and its genre matters. Education that advocates for proper wildlife management and policy is the key to saving species from extinction. 

 

Conclusion

The passenger pigeon went from the world’s largest bird population to complete extinction because of mistreatment by European colonizers. My survey of high school and college students shows that people are not learning from this parable. The species is an ideal cautionary tale because of the speed and severity of its decline. It is increasingly important that we care about our environment before it is too late to take action. The clock is ticking; the passenger pigeon told us so. If we can learn to mourn a bird we never met, we may never have to mourn the birds we know.

 

Works Cited

  1. Schorger, A.W. The Passenger Pigeon: Its Natural History and Extinction. Wisconsin, University of Wisconsin Press, 1955.
  2. Dirk, Kerry. “Navigating Genres”. Writing Spaces: Readings on Writing, edited by Charles Lowe and Pavel Zemliansky, vol. 1, Parlor Press, 2010.
  3. Avery, Mark. A Message from Martha. London, Bloomsbury Publishing, 2014.
  4. Greenberg, Joel. A Feathered River Across the Sky: The Passenger Pigeon’s Flight to Extinction. New York, Bloomsbury USA, 2014.
  5. Wilson, Alexander. Wilson’s American Ornithology. Boston, Otis Broader and Company, 1853.
  6. Leopold, Aldo. A Sand County Almanac. New York, Oxford University Press, 1954.
  7. Soll, David. “Resurrecting the Story of the Passenger Pigeon in Pennsylvania.” Pennsylvania History: A Journal of Mid-Atlantic Studies, vol. 79, no. 4, 2012, pp. 507–519. JSTOR, www.jstor.org/stable/10.5325/pennhistory.79.4.0507.
  8. Project Passenger Pigeon. The Chicago Academy of Sciences and its Peggy Notebaert Nature Museum, 2012, passengerpigeon.org. Accessed 19 February 2019.

Potential Methods of Life Detection on Ocean Worlds

By Ana Menchaca, Biochemistry and Molecular Biology ‘20

Author’s Note: As a biochemistry major who is interested in pursuing astrobiology research, I initially wrote this literature review for an assignment in my Writing in Biology course. Methods of life detection, and what we know about life itself, form a field in which we still have much to discover and explore, given that Earth is our only example, and I hope to be involved in this exploration myself in the future.

 

Abstract

Ocean worlds, such as Saturn’s moon Enceladus, provide intriguing environments and the potential for life as we continue to explore the Solar System. Organic compounds discovered in plumes erupting from the moon during flybys point towards the presence of amino acids and other precursors of life. The data collected from these flybys has, in turn, been used to calculate the theoretical amounts of amino acids present in the ocean of Enceladus. While this data is intriguing, it relies on a limited definition of life, based on organisms and macromolecules that have only been observed on Earth. Other methods, including detection with nucleic acids or nanopores, have been proposed. Nucleic acids utilize binding to identify a broad spectrum of compounds, while nanopores utilize the measurement of ionic flow. These alternative methods allow for a broader spectrum of compound detection than terran-based methods, creating the potential to detect unfamiliar kinds of life. Research into more holistic detection should continue.

Keywords: astrobiology, life detection, planetary exploration, biosignatures

 

Introduction

The search for life elsewhere in the Solar System is becoming increasingly relevant and, more importantly, feasible. Icy moons, such as Europa, Titan, and Enceladus, have been identified as holding the greatest potential for extraterrestrial life within the Solar System [1]. Europa and Enceladus, with seas below icy crusts, have geysers with unidentified fluctuations, along with evidence of tidal warming and geologic activity [2]. The Cassini spacecraft identified these geysers on Enceladus during a flyby in 2006, spouting from four distinct fractures on the surface of the moon [3]. Analysis of the vapors produced shows that they consist mainly of water, along with CO2, N2, CO, CH4, salts, other organic compounds, and silica particulates [3, 4]. This points towards hydrothermal activity, the movement of heated water, which has the potential to provide the energy necessary for life [4]. Additionally, the discovery of volatile aliphatic hydrocarbons in these plumes potentially indicates some degree of organic evolution within the seas of Enceladus [4].

However, there is no consensus yet on how to detect and identify life [1, 2]. Some scientists propose looking for life based on the shared ancestry hypothesis, which proposes all life shares the same genetic ancestry [2]. Others propose there is a potential for extraterrestrial life to present variations from terran life that we may neither be able to recognize nor detect with our current methods of biochemical detection [5]. Experimentally, the potential for nucleic acids based on different backbones has already been identified [5]. Here, we examine the range of proposed methods for identifying extraterrestrial life. 

 

Proposed theories and methods based on current knowledge of life

Collection of amino acids

Current data collected from Enceladus’ plumes presents organic compounds that provide potential evidence of amino acid synthesis taking place in the oceans of the moon. Steel et al. used the thermal flux at the moon’s South Polar Terrain (SPT) to predict the hydrogen produced by hydrothermal activity. The predicted rates of production ranged between 0.63 and 33.8 mol/s of H2, and from there, amino acid production rates were estimated to be between 8.4 and 449.4 mmol/s [4]. Annual biomass production was also modeled in these calculations and estimated at 4 × 10⁵ to 2 × 10⁶ kg/year, compared to roughly 10¹⁴ kg/year on Earth. These estimates, however, depend on the environment being an abiotic, steady-state ocean; the actual production rates could be different if there is life present in Enceladus’ ocean [4].
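The two reported ranges imply a roughly fixed molar yield of amino acids per unit of hydrogen (8.4 mmol/s ÷ 0.63 mol/s ≈ 449.4 mmol/s ÷ 33.8 mol/s ≈ 1.3%). As a back-of-the-envelope check, a few lines of Python scale the hydrogen range by that inferred yield; note that the ~1.3% factor is inferred from the published numbers, not taken from the paper’s underlying chemical model:

```python
# Scaling Steel et al.'s published H2 range by an inferred molar yield.
h2_low, h2_high = 0.63, 33.8     # mol H2 per second
aa_yield = 8.4e-3 / 0.63         # mol amino acid per mol H2 (~0.0133)

aa_low_mmol = h2_low * aa_yield * 1e3
aa_high_mmol = h2_high * aa_yield * 1e3
print(f"amino acids: {aa_low_mmol:.1f} to {aa_high_mmol:.1f} mmol/s")
# amino acids: 8.4 to 450.7 mmol/s (close to the reported 449.4 mmol/s)
```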

While this limits our predictions of Enceladus’ true environment, it still provides a basis that can be extrapolated for use in the design of modules to be sent out. One such proposed module is the Enceladus Organic Analyzer, which is designed to analyze amino acids through chain length variations [3]. To properly collect and analyze the amino acids proposed to be in Enceladus’ oceans, there are several requirements. The sample must be collected from the subsurface ocean with minimal degradation, isomerization, racemization, and contamination of biological molecules and amino acids [3]. A collection chamber made of aluminum has been modeled, designed to reduce the thermal heating caused by the collection of samples in order to best preserve them. If the moon contains bacteria as postulated, this design will lyse and kill collected cells through either heat or shock but release their more stable chemical components for analysis [3]. This approach depends highly upon current knowledge as a starting place, focusing with a limited scope on amino acid and cell identification. Another method using cell identification is digital holographic microscopy.

Digital holographic microscopy

The development and improvement of microscopy, while beneficial, depends heavily on the assumption that life will be found in the same form as terran cells. Investigators propose digital holographic microscopy (DHM) as a more efficient alternative to traditional light microscopy [1]. The technology produces a 100-fold improvement in depth of field and records both the intensity and the phase of an image. However, even with the increase in resolution, differentiating cells from cell-shaped structures is difficult, even before taking into account potential differences in extraterrestrial life. Measurements of refraction were able to differentiate experimentally between crystalline structures and cells in the study’s Arctic samples. While the technology can be miniaturized and can discriminate between cells and minerals, it depends highly on actually capturing a sufficient number of cells from plumes. This experimental data was obtained using dye-less techniques, which still function for organisms without DNA or RNA, together with refraction’s potential to differentiate structures [1]. DHM is thus useful both for detecting cells based on collected data and for the potential discovery of organisms without nucleic acids as we know them.
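To illustrate why DHM recovers both intensity and phase, here is a minimal numpy sketch of the standard off-axis reconstruction: Fourier-transform the hologram, crop the +1 diffraction order, and transform back. The synthetic hologram and all parameters below are illustrative assumptions, not the instrument’s actual pipeline:

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, window=32):
    """Recover the complex object field from an off-axis hologram.

    carrier: (row, col) of the +1 order in the shifted Fourier plane,
    an instrument-specific parameter assumed here for illustration.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    r, c = carrier
    # Crop the sideband that carries the complex object field.
    sideband = F[r - window:r + window, c - window:c + window]
    # Treating the crop's center as DC demodulates the carrier fringes.
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(field), np.angle(field)  # amplitude and phase images

# Synthetic test: a tilted reference wave plus a weak Gaussian phase object.
n = 256
y, x = np.mgrid[0:n, 0:n]
phase_obj = 0.5 * np.exp(-((x - n/2)**2 + (y - n/2)**2) / (2 * 15**2))
tilt = 2 * np.pi * 20 * x / n                 # carrier at +20 cycles
holo = np.abs(1 + np.exp(1j * (tilt + phase_obj)))**2
amp, phase = reconstruct_off_axis(holo, carrier=(n // 2, n // 2 + 20))
```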

 

Expanding outside the current knowledge of life

Detection using nucleic acids

Other experiments and proposals, while not explicitly targeting life outside the current perceptions, propose a more holistic collection of data. This carries the potential of identifying life outside our current scope, as opposed to focusing directly on known amino acids and cells. Using a broader concept of nucleic acids as a means of detection and identification is one such method.

Oligonucleotides, through forming secondary and tertiary structures, have specificity and affinity for a wide variety of molecules, both organic and inorganic [2]. Even at a length of only 15 nucleotides and within complex mixtures, these molecules can bind to what is being analyzed, known as the analytes. Systematic evolution of ligands by exponential enrichment (SELEX) is a process that can identify oligonucleotides that bind very specifically to analytes. This method, however, proposes using the low-affinity, low-specificity nucleic acids that are typically discarded in that process. Unlike antibodies, such a method requires no prior knowledge of the surface attributes or the three-dimensional structure of the molecule being bound. By accumulating a wide range of binding sequences and applying statistical analysis, a vast number of compounds can be detected and environmental variations identified. Additionally, this method posits that the optimal means of capturing sequences is the proximity ligation assay (PLA), a technique already in use across scientific fields. PLA purifies the binding species through ligation and amplification, producing a lower background than sieving, which separates by size. It can also capture a vast range of sequences and targets, including inorganic, organic, and polymeric molecules [2], and is thus more capable of providing holistic results.
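A sketch of the statistical idea follows: treat the binding levels of a large panel of low-affinity aptamers as a fingerprint vector and compare samples by similarity. The panel size, data, and similarity measure below are hypothetical stand-ins for the sequencing-based readout the authors propose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binding levels for a 100-aptamer panel; in practice these
# would come from sequencing the ligated and amplified binding species.
references = rng.random((5, 100))   # fingerprints of characterized samples
unknown = rng.random(100)           # fingerprint of a new sample

def cosine_similarity(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

scores = [cosine_similarity(unknown, ref) for ref in references]
# A fingerprint dissimilar to every reference would flag chemistry worth
# a closer look, without assuming terran biomolecules.
print(f"best match similarity: {max(scores):.2f}")
```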

 

Nanopore-based sensing

Nanopore-based sensing, presented as an alternative to current methods, detects and analyzes genetic information carriers in watery systems without making assumptions about their chemical composition [5]. This system relies upon the restrictions placed on these sorts of compounds within watery systems, as the repeating charge of the backbone keeps polymer strands from folding and favors solubility in water. A nanopore is a hole a few nanometers in diameter in an insulating membrane separating two chambers that contain an electrolyte solution. Because of this diameter, only single-stranded DNA can pass through the nanopore, allowing for slow movement and characteristic signals that produce data clearly distinguishable from other molecular data. The method detects and analyzes molecules by measuring the ionic flow across the membrane. While biological nanopores are able to detect and resolve individual terran bases, nonbiological, solid-state nanopores provide the same function while avoiding biological nanopores’ bias toward terran molecules. Graphene, with its crystalline form, can have its membrane adjusted to accommodate only one nucleotide at a time or can be sculpted to produce nanopores of varying sizes. This could allow for the detection of other polymers whose chemical and steric properties differ from currently known polymers. The approach has the potential to analyze a broad range of molecules with no assumptions beyond a linear structure and a repeating external charge. Few identified nonbiological polymers are structured this way, so any data picked up by nanopores would be significant [5].
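To make the measurement concrete, below is a minimal, hypothetical Python sketch of how translocation events are typically extracted from an ionic current trace: a molecule in the pore transiently blocks the current, so events show up as dips below the open-pore baseline. The trace and thresholds are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trace: ~100 pA open-pore current with noise, plus two
# blockade events where a polymer transiently occupies the pore.
current = 100 + rng.normal(0, 1.5, 10_000)   # picoamps
current[3000:3200] -= 40                     # event 1: long, shallow
current[7000:7050] -= 60                     # event 2: short, deep

baseline = np.median(current)
blocked = current < 0.8 * baseline           # flag >20% current drops

# Contiguous blocked stretches are events; report dwell time and depth.
edges = np.flatnonzero(np.diff(blocked.astype(int)))
for start, stop in zip(edges[::2] + 1, edges[1::2] + 1):
    depth = baseline - current[start:stop].mean()
    print(f"event: dwell {stop - start} samples, blockade {depth:.0f} pA")
```

Dwell time and blockade depth together are the "characteristic signals" that let a pore distinguish one molecule, or one monomer, from another.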

There are, however, limitations to this approach. Nucleic acids translocate through a pore very quickly under the applied voltage, making it necessary to find methods of slowing them down so that the signal can be read accurately [5]. Potential methods include control through physical factors such as temperature, salinity, and viscosity. The conditions of collection on other planets also pose the problem of extreme dilution of the target molecules, which depends on a large number of variables [5].

 

Conclusion

Radiation and stability are major concerns in moving forward with any sort of data collection from extraterrestrial worlds. Mechanisms and samples are potentially open to the detrimental effects of extreme vacuum and solar radiation [5]. These problems should be addressed in conjunction with the technology actually being used for analysis to produce the most beneficial results. Some of these issues have been addressed to some extent, such as using microfluidics for collection because they are unaffected by the vacuum of space due to their own internal surface tension [3]. However, these problems need to be explored further in all cases to ensure that each method can function in uncontrolled or non-terran environments.

The presented data indicates the potential for the existence of amino acids in these environments. Even though prediction and detection of these amino acids seem a logical step forward, the development of further, broader technology for life detection should also be pursued. Current knowledge is limited by the qualities of terran life; while that is a well-supported starting point, methods that leave open the potential of deviation from this point may allow for the detection of otherwise overlooked forms of life. Moving forward, it seems only logical to combine methods that can detect both the known and the unknown, allowing scientists to gather the widest possible array of data in future missions, especially on promising worlds like Enceladus.

 

References

  1. Bedrossian M, Lindensmith C, Nadeau JL. 2016. Digital Holographic Microscopy, a Method for Detection of Microorganisms in Plume Samples from Enceladus and Other Icy Worlds. Astrobiology 17(9):913–925.
  2. Johnson SS, Anslyn EV, Graham HV, Mahaffy PR, Ellington AD. 2018. Fingerprinting Non-Terran Biosignatures. Astrobiology 18(7):915–922.
  3. Mathies RA, Razu ME, Kim J, Stockton AM, Turin P, Butterworth A. 2016. Feasibility of Detecting Bioorganic Compounds in Enceladus Plumes with the Enceladus Organic Analyzer. Astrobiology 17(9):902–912.
  4. Steel EL, Davila A, Mckay CP. 2017. Abiotic and Biotic Formation of Amino Acids in the Enceladus Ocean. Astrobiology 17(9):862–875. 
  5. Rezzonico F. 2014. Nanopore-Based Instruments as Biosensors for Future Planetary Missions. Astrobiology 14(4):344–351.

The Scientific Cost of Progression: CAR-T Cell Therapy

By Picasso Vasquez, Genetics and Genomics ‘20

Author’s Note: One of the main goals for my upper division UWP class was to write about a recent scientific discovery. I decided to write about CAR-T cell therapy because this summer I interned at a pharmaceutical company and worked on a project that involved using machine learning to optimize the CAR-T manufacturing process. I think readers would benefit from this article because it talks about a recent development in cancer therapy.

 

“There’s no precedent for this in cancer medicine.” Dr. Carl June is the director of the Center for Cellular Immunotherapies and the director of the Parker Institute for Cancer Immunotherapy at the University of Pennsylvania. June and his colleagues were the first to use CAR-T, which has since revolutionized personal cancer immunotherapy [1]. “They were like modern-day Lazarus cases,” said Dr. June, referencing the resurrection of Saint Lazarus in the Gospel of John and how it parallels the first two patients to receive CAR-T.  CAR-T, or chimeric antigen receptor T-cell, is a novel cancer immunotherapy that uses a person’s own immune system to fight off cancerous cells existing within their body [1].

Last summer, I had the opportunity to venture across the country from Davis, California, to Springhouse, Pennsylvania, where I worked for 12 weeks as a computational biologist. One of my projects used machine learning models to improve the manufacturing process of CAR-T, with the goal of reducing the cost of the therapy. Manufacturing begins when T-cells are collected from the hospitalized patient through a process called leukapheresis. The collected T-cells are frozen and shipped to a manufacturing facility, such as the one I worked at, where they are grown in large bioreactors. On day three, the T-cells are genetically engineered to be selective towards the patient’s cancer by the addition of the chimeric antigen receptor; this step turns the T-cells into CAR-T cells [2]. For the next seven days, the engineered T-cells continue to grow and multiply in the bioreactor. On day 10, the cells are frozen and shipped back to the hospital, where they are infused into the patient. Over the 10 days before receiving the CAR-T cells, the patient is given chemotherapy to prepare their body for the immunotherapy [2]. This whole process is very expensive; as Dr. June put it in his TEDMED talk, “it can cost up to 150,000 dollars to make the CAR-T cells for each patient.” And the cost does not stop there; when the cost of treating other complications is included, the total “can reach one million dollars per patient” [1].
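As a sketch of what that kind of model can look like, the snippet below fits a regression from process parameters to final cell yield. The feature names, synthetic data, and model choice are hypothetical illustrations, not the actual manufacturing pipeline I worked on:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical records from 200 manufacturing runs: seeding density,
# cytokine concentration, culture temperature, transduction efficiency.
X = rng.random((200, 4))
# Synthetic day-10 yield, made to depend mostly on the first two inputs.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(f"cross-validated R^2: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```

A model like this can rank which process parameters most affect yield, which is one way to shave cost and failure risk out of a ten-day, per-patient manufacturing run.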

The biggest problem with fighting cancer is that cancer cells are the result of normal cells in the body gone wrong. Because cancer cells look so similar to normal cells, the immune system’s B and T-cells are often unable to tell the difference between them and thus unable to fight off the cancer. The concept underlying CAR-T is to isolate a patient’s T-cells and genetically engineer them to express a protein, called a receptor, that can directly recognize and target the cancer cells [2]. The genetically modified receptor allows the newly created CAR-T cells to bind cancer cells that display the receptor’s cognate antigen. Once the bond between receptor and antigen has formed, the CAR-T cells become cytotoxic and release small molecules that signal the cancer cell to begin apoptosis [3]. Although there have long been drugs that help the body’s T-cells fight cancer, CAR-T breaks the mold with its efficacy and selectivity. Dr. June stated, “27 out of 30 patients, the first 30 we treated, or 90 percent, had a complete remission after CAR-T cells.” He went on to say, “companies often declare success in a cancer trial if 15 percent of the patients had a complete response rate” [1].

As amazing as the results of CAR-T have been, this success did not happen overnight. According to Dr. June, “CAR T-cell therapies came to us after a 30-year journey, along with a road full of setbacks and surprises.” One of these setbacks is the side effects that result from the delivery of CAR-T cells. When T-cells find their corresponding antigen, in this case on the surface of the cancer cells, they begin to multiply and proliferate at very high levels. For patients who have received the therapy, this is a good sign, because the increase in T-cells indicates that the therapy is working. But rapidly proliferating T-cells also produce molecules called cytokines, small signaling proteins that direct the cells around them. During CAR-T therapy, the T-cells rapidly produce a cytokine called interleukin-6 (IL-6), which in high amounts induces inflammation, fever, and even organ failure [3].

According to Dr. June, the first patient to receive CAR-T had “weeks to live and … already paid for his funeral.”  When he was infused with CAR-T, the patient had a high fever and fell comatose for 28 days [1]. When he awoke from his coma, he was examined by doctors and they found that his leukemia had been completely eliminated from his body, meaning that CAR-T had worked. Dr. June reported that “the CAR-T cells had attacked the leukemia … and had dissolved between 2.9 and 7.7 pounds of tumor” [1].

Although the first patients had outstanding success, doctors still did not know what caused the fevers and organ failures. It was not until the first child received CAR-T that they discovered the cause of the adverse reaction. Emily Whitehead, at six years old, was the first child to be enrolled in the CAR-T clinical trial [1]. Emily had been diagnosed with acute lymphoblastic leukemia (ALL) that was advanced and no longer responding to treatment. After she received the infusion of CAR-T, she experienced the same symptoms as the prior patient. “By day three, she was comatose and on life support for kidney failure, lung failure, and coma. Her fever was as high as 106 degrees Fahrenheit for three days. And we didn’t know what was causing those fevers” [1]. While running tests on Emily, the doctors found an upregulation of IL-6 in her blood. Dr. June suggested that they administer Tocilizumab, which blocks IL-6 signaling, to combat the increased IL-6 levels. After contacting Emily’s parents and the review board, Emily was given Tocilizumab: “Within hours after treatment with Tocilizumab, Emily began to improve very rapidly. Twenty-three days after her treatment, she was declared cancer-free. And today, she’s 12 years old and still in remission” [1]. Currently, two versions of CAR-T have been approved by the FDA, Yescarta and Kymriah, which treat diffuse large B-cell lymphoma (DLBCL) and ALL, respectively [1].

The whole process is stressful and time-sensitive. This long manufacturing task is behind the million-dollar price tag on CAR-T and is why only patients in the worst medical states currently receive it [1]. However, as Dr. June states, “the cost of failure is even worse.” Despite the financial cost and difficult manufacturing process, CAR-T has elevated cancer therapy to a new level and set a new standard of care. Still, there is much work to be done: the current CAR-T drugs have only been shown to be effective against blood cancers such as lymphomas, not against solid tumors [4]. Regardless, research into improving CAR-T continues at both the academic and industrial levels.

 

References:

  1. June, Carl. “A ‘living drug’ that could change the way we treat cancer.” TEDMED, Nov. 2018, ted.com/talks/carl_june_a_living_drug_that_could_change_the_way_we_treat_cancer.
  2. Tyagarajan S, Spencer T, Smith J. 2019. Optimizing CAR-T Cell Manufacturing Processes during Pivotal Clinical Trials. Mol Ther. 16: 136-144.
  3. Maude SL, Laetch TW, Buechner J, et al. 2018. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. N Engl J Med. 378: 439-448.
  4. O’Rourke DM, Nasrallah MP, Desai A, et al. 2017. A single dose of peripherally infused EGFRvIII-directed CAR T cells mediates antigen loss and induces adaptive resistance in patients with recurrent glioblastoma. Sci Transl Med. 9: 399.