Feeding 8 Billion People: Engineering Crops for Climate Resiliency
By Shaina Eagle, Global Disease Biology ’24
Feeding the world’s 8 billion people [2], a number that is still growing, is an Augean task that requires cooperation among farmers, scientists, government agencies, and industry stakeholders across the globe. Agriculture and climate are deeply intertwined: climate conditions play a critical role in determining agricultural productivity and therefore have a significant impact on global food security. The climate crisis poses immense challenges to food security and to the farmers whose livelihoods depend on crop production. As its consequences intensify, developing resilient agricultural systems is essential to ensuring that our food supply, and those who grow it, can adapt without further depleting carbon and water resources.
Climate-smart agriculture identifies technologies that can best respond to the impacts of climate change, such as increasing temperatures and heat waves, changing rainfall patterns, severe storms, drought, and wildfires that adversely affect crop yield and quality [1]. Agronomists, plant biologists, and farmers are working to develop crops that will increase sustainable production and better withstand a changing climate via various genetic techniques.
Clonal Seeds
A team including UC Davis Assistant Professor of Plant Sciences Imtiyaz Khanday genetically engineered rice seeds that reproduce clonally, without sexual reproduction, in order to maintain the desirable traits found in the F1 generation (Vernet et al. 2022). They developed a breeding technique that allows for high-frequency production of hybrid rice, meaning a large quantity of seed can be produced quickly and cost-effectively, using synthetic apomixis. Apomixis, a type of asexual reproduction in plants, allows seeds to form without fertilization, which is useful in hybrid breeding programs. The study used CRISPR/Cas9 gene editing to introduce mutations in the genes responsible for sexual reproduction in rice. The resulting seeds were planted and produced an F1 generation of plants that were genetically stable and had high yield potential, and subsequent generations were clonally propagated from these F1 plants. High-frequency production of this kind is important for meeting the increasing demand for food and other agricultural products, as well as for improving the efficiency and profitability of farming operations.
The study suggests that this technique could be a valuable tool for plant breeders to produce high-quality hybrid rice seeds with more efficient and cost-effective methods. Clonal propagation can help maintain desirable traits, such as disease resistance, yield potential, or drought tolerance, that might otherwise be lost through sexual reproduction as the climate crisis threatens agriculture. It is also faster than sexual reproduction methods such as cross-breeding, which can take several generations and require extensive testing to identify desirable traits.
De Novo Domestication
De novo is a Latin term that means “from the beginning” or “anew”. In the context of genetics and plant breeding, de novo refers to the creation of something new or the starting point for the development of a new organism or trait. De novo domestication, for example, refers to the process of identifying and selecting wild plants with desirable traits and developing them into new crops that are better adapted to agricultural use. This approach differs from traditional domestication, which involves selecting and breeding plants that have already been used by humans for thousands of years. Eckardt et al. [1] highlight the potential benefits of de novo domestication, including the creation of new crops that are better adapted to changing environmental conditions, and the conservation of genetic diversity by using previously unexploited wild species.
A study by Lemmon et al. (2021) aimed to create a domesticated tomato variety with desirable traits by introducing mutations into genes related to fruit size and shape via CRISPR-Cas9. While there are many tomato cultivars available, they often have limitations in terms of yield, quality, or other traits that are important for consumers and growers. Therefore, there is a need to develop new tomato varieties with improved characteristics, and the de novo domestication of a wild tomato variety using genome editing offers a potential solution to this challenge. The domesticated variety had several desirable traits, including larger fruit size, smoother fruit shape, reduced seed count, and prolonged fruit shelf life. Additionally, the domesticated tomato plants had increased branching and produced more fruit per plant than the wild-type tomato plants.
Kaul et al. (2022) conducted a de novo genome assembly of rice bean (Vigna umbellata), a nutritionally rich crop with potential for future domestication. The study revealed novel insights into the crop’s flowering potential, habit, and palatability, all of which are important traits for efficient domestication. Flowering potential refers to the crop’s ability to produce flowers, which is important for seed production and crop yield; understanding its genetic basis can help breeders select plants that flower earlier or later, depending on their needs. Habit refers to the overall growth pattern of the plant, such as its height, branching, and leaf morphology; understanding its genetic basis can help breeders select plants that are more suitable for specific growing conditions or cultivation methods. Palatability refers to the taste and nutritional value of the crop, which are important factors in its acceptance as a food source. Together, these traits can contribute to the development of a more productive, nutritious, and resilient crop.

The researchers also identified genes involved in key pathways such as carbohydrate metabolism, plant growth and development, and stress response. Climate change is expected to have a significant impact on crop yields, water availability, and soil fertility; one NASA study found that maize yields may decrease by 24% by 2030 [3]. Understanding the genetic basis of stress response and carbohydrate metabolism can help breeders develop crops with better nutritional value that are more resilient to environmental stressors such as drought, heat, and pests. Identifying genes involved in plant growth and development also allows breeders to introduce desirable traits, such as earlier flowering or increased yield, which can accelerate crop improvement and make it easier to develop new varieties. Overall, the genes identified in the study provide a foundation for developing crops that are better adapted to changing environmental conditions and more suitable for cultivation, which is crucial for ensuring food security in the face of climate change.
Genetically enhancing common crops
Molero et al. (2023) identified exotic alleles (germplasm unadapted to the target environment) associated with heat tolerance in wheat through genomic analysis and conducted breeding experiments to develop new wheat varieties with improved heat tolerance. The exotic alleles came from wheat lines that originated in diverse regions around the world, including Africa, Asia, and South America, which the authors obtained from the International Maize and Wheat Improvement Center (CIMMYT). The identified alleles increased heat tolerance in wheat under field conditions, and the effect was consistent across multiple environments.
The alleles had diverse functions, including regulating heat shock proteins, osmotic stress response, and photosynthesis, and the study provides evidence that combining multiple exotic alleles could lead to wheat varieties with improved heat tolerance under field conditions. The authors crossed the heat-tolerant lines carrying the exotic alleles with local commercial varieties to develop new breeding populations, then evaluated the heat tolerance of these populations under field conditions to identify lines with improved heat tolerance. The selected lines were further evaluated in multiple environments to confirm their performance and stability. Heat tolerance was measured by exposing the plants to high temperatures under field conditions and evaluating their performance. Specifically, the authors conducted experiments in three environments known to impose high-temperature stress on wheat: a dry and hot irrigated environment, a semi-arid rainfed environment, and a temperate irrigated environment. They evaluated multiple traits related to heat tolerance, including yield, plant height, spike length, and the number of spikes per plant.
They also measured physiological traits such as chlorophyll fluorescence, canopy temperature, and photosynthetic activity. By combining these phenotypic measurements with genomic analyses, they identified the wheat lines and alleles with the greatest potential for improving heat tolerance under field conditions. This demonstrates the potential for using exotic alleles in plant breeding to improve crop performance and address the challenges of climate change.
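To make the idea of an "allele associated with heat tolerance" concrete, the sketch below shows a minimal marker-trait association test in Python. The genotype scores and yield values are invented placeholders, not data from Molero et al.; the point is only that lines carrying a candidate allele can be compared with lines lacking it using a simple linear model.

```python
# Minimal sketch of a marker-trait association test (hypothetical data,
# not values from Molero et al. 2023). Each wheat line is scored for the
# presence (1) or absence (0) of a candidate exotic allele, and its yield
# under heat stress is regressed on that score.
import numpy as np
from scipy import stats

# Hypothetical panel of 10 wheat lines: allele presence and yield (t/ha)
# measured in a heat-stressed field trial.
allele = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
yield_heat = np.array([3.9, 4.1, 3.2, 3.0, 4.0, 3.1, 3.8, 3.3, 4.2, 2.9])

# Simple linear regression of yield on allele presence; the slope estimates
# the average yield advantage associated with carrying the allele.
result = stats.linregress(allele, yield_heat)
print(f"estimated allele effect: {result.slope:.2f} t/ha, p = {result.pvalue:.3g}")
```

In a real analysis this test would be repeated across thousands of markers with corrections for population structure and multiple testing, but the underlying logic is the same.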
Porch et al. (2020) report the release of a new tepary bean germplasm (seeds or plant parts that can be passed on to the next generation and are useful in breeding efforts) called TARS-Tep 23, which exhibits broad abiotic stress tolerance as well as resistance to rust and common bacterial blight. Tepary bean (Phaseolus acutifolius) is a drought-tolerant legume crop native to the southwestern United States and northern Mexico. Tepary beans are generally grown in arid and semi-arid regions of North America, including the Sonoran Desert, Chihuahuan Desert, and the Great Basin, and they are also grown in parts of Central and South America. According to FAO statistics, the total world production of tepary beans in 2019 was around 4,000 metric tons. Rust and common bacterial blight are two diseases that can affect the growth and productivity of tepary beans: rust is a fungal disease that causes orange or brown spots on the leaves and stems of plants, leading to reduced photosynthesis and yield loss, while common bacterial blight is a bacterial disease that can cause wilting, necrosis, and reduced yield in affected plants.
The researchers conducted field trials and laboratory experiments to evaluate the performance and traits of TARS-Tep 23 under different conditions. Laboratory experiments involved inoculating TARS-Tep 23 with rust and common bacterial blight pathogens and comparing its performance and traits with other tepary beans. Field trials were carried out under conditions such as normal rainfall, drought, and heat stress. The results showed that TARS-Tep 23 had higher yields and better growth under drought and heat stress compared to other tepary bean varieties, and it also showed high resistance to rust and common bacterial blight. The release of TARS-Tep 23 provides a valuable resource for breeding programs and can contribute to enhancing the productivity and sustainability of tepary bean cultivation. Climate-resilient germplasm like this is a critical resource for crop improvement and the preservation of biodiversity; plant breeders and researchers use it to develop new varieties with desirable traits such as disease resistance, stress tolerance, and improved yield.
Conclusion
The urgent need to address the challenge of climate change and its impact on global food security cannot be overemphasized. The world is already experiencing food shortages due to the adverse effects of climate change, and this problem is likely to worsen in the future unless appropriate measures are taken. Significant strides are being made in the research and development of new agricultural and genetic technologies that can engineer crops for climate resiliency. These technologies offer hope for a more sustainable future by enhancing food production, increasing resilience to extreme weather conditions, and mitigating the impact of climate change. However, it is essential to recognize that research and development efforts should not only focus on genetic engineering but should also involve all levels of the food production process, including better management practices, more efficient use of resources, and improved supply chain management. Only by taking a comprehensive approach can we hope to achieve a sustainable and resilient food system that can withstand the challenges of climate change.
References
[1] Eckardt, Nancy A, Elizabeth A Ainsworth, Rajeev N Bahuguna, Martin R Broadley, Wolfgang Busch, Nicholas C Carpita, Gabriel Castrillo, et al. “Climate Change Challenges, Plant Science Solutions.” The Plant Cell 35, no. 1 (January 2, 2023): 24–66. https://doi.org/10.1093/plcell/koac303.
[2] Frayer, Lauren. “Earth Welcomes Its 8 Billionth Baby. Is That Good or Bad News… or a Bit of Both?” NPR, November 15, 2022, sec. Goats and Soda. https://www.npr.org/sections/goatsandsoda/2022/11/15/1136745637/earth-welcomes-its-8-billionth-baby-is-that-good-or-bad-news-or-a-bit-of-both.
[3] Gray, Ellen. NASA’s Earth Science News. “Global Climate Change Impact on Crops Expected Within 10 Years, NASA Study Finds.” Climate Change: Vital Signs of the Planet. Accessed May 30, 2023. https://climate.nasa.gov/news/3124/global-climate-change-impact-on-crops-expected-within-10-years-nasa-study-finds.
[4] Jägermeyr, Jonas, Christoph Müller, Alex C. Ruane, Joshua Elliott, Juraj Balkovic, Oscar Castillo, Babacar Faye, et al. “Climate Impacts on Global Agriculture Emerge Earlier in New Generation of Climate and Crop Models.” Nature Food 2, no. 11 (November 1, 2021): 873–85. https://doi.org/10.1038/s43016-021-00400-y.
[5] Jia, Huicong, Fang Chen, Chuanrong Zhang, Jinwei Dong, Enyu Du, and Lei Wang. “High Emissions Could Increase the Future Risk of Maize Drought in China by 60–70 %.” Science of The Total Environment 852 (December 2022): 158474. https://doi.org/10.1016/j.scitotenv.2022.158474.
[6] Liu, Weihang, Tao Ye, Jonas Jägermeyr, Christoph Müller, Shuo Chen, Xiaoyan Liu, and Peijun Shi. “Future Climate Change Significantly Alters Interannual Wheat Yield Variability over Half of Harvested Areas.” Environmental Research Letters 16, no. 9 (September 1, 2021): 094045. https://doi.org/10.1088/1748-9326/ac1fbb.
[7] McMillen, Michael S., Anthony A. Mahama, Julia Sibiya, Thomas Lübberstedt, and Walter P. Suza. “Improving Drought Tolerance in Maize: Tools and Techniques.” Frontiers in Genetics 13 (October 28, 2022): 1001001. https://doi.org/10.3389/fgene.2022.1001001.
[8] Molero, Gemma, Benedict Coombes, Ryan Joynson, Francisco Pinto, Francisco J. Piñera-Chávez, Carolina Rivera-Amado, Anthony Hall, and Matthew P. Reynolds. “Exotic Alleles Contribute to Heat Tolerance in Wheat under Field Conditions.” Communications Biology 6, no. 1 (January 9, 2023): 21. https://doi.org/10.1038/s42003-022-04325-5.
[9] Ozias-Akins, Peggy, and Joann A. Conner. “Clonal Reproduction through Seeds in Sight for Crops.” Trends in Genetics 36, no. 3 (March 2020): 215–26. https://doi.org/10.1016/j.tig.2019.12.006.
[10] Tiziani, Raphael, Begoña Miras-Moreno, Antonino Malacrinò, Rosa Vescio, Luigi Lucini, Tanja Mimmo, Stefano Cesco, and Agostino Sorgonà. “Drought, Heat, and Their Combination Impact the Root Exudation Patterns and Rhizosphere Microbiome in Maize Roots.” Environmental and Experimental Botany 203 (2022): 105071. https://doi.org/10.1016/j.envexpbot.2022.105071.
[11] Underwood, Charles J., and Raphael Mercier. “Engineering Apomixis: Clonal Seeds Approaching the Fields.” Annual Review of Plant Biology 73, no. 1 (May 20, 2022): 201–25. https://doi.org/10.1146/annurev-arplant-102720-013958.
[12] Vernet, Aurore, Donaldo Meynard, Qichao Lian, Delphine Mieulet, Olivier Gibert, Matilda Bissah, Ronan Rivallan, et al. “High-Frequency Synthetic Apomixis in Hybrid Rice.” Nature Communications 13, no. 1 (December 27, 2022): 7963. https://doi.org/10.1038/s41467-022-35679-3.
[13] Yu, Chengzheng, Ruiqing Miao, and Madhu Khanna. “Maladaptation of U.S. Corn and Soybeans to a Changing Climate.” Scientific Reports 11, no. 1 (June 11, 2021): 12351. https://doi.org/10.1038/s41598-021-91192-5.
Tau Proteins for Early Diagnosis of Alzheimer’s Disease: A Literature Review
By Yoonah Kang, Neurobiology, Physiology, and Behavior ’24
Author Bio: I am a third-year student studying Neurobiology, Physiology, and Behavior. I have always enjoyed biology, in middle school and high school. I became interested in neurobiology through AP Psychology in high school because I really enjoyed the section about the biology behind psychological phenomena. This paper was originally written for the UWP 104F class, writing in the health professions. I was interested in Alzheimer’s Disease because it affects many people around the world, yet there is still no cure or treatment for it. While reading articles about Alzheimer’s, I found that the best course for longevity is early diagnosis, which allows for early intervention. So I focused on an approach that could make it easier to diagnose patients. I hope readers understand that Alzheimer’s is very complex and there is still a lot to learn, but also that there has been a lot of research to further our knowledge about AD.
Introduction:
As the population over 65 years of age increases, the prevalence of Alzheimer’s Disease in the United States is projected to triple to 14 million by 2060 [1]. Alzheimer’s disease (AD) is a progressive neurodegenerative disease that begins with mild memory loss and can ultimately lead to death. It is characterized by the accumulation of amyloid-β and tau neurofibrillary tangles in the brain [2]. These characteristics can be measured to determine the onset of Alzheimer’s at earlier stages. Currently, treatments only delay the onset and progression of symptoms. Early diagnosis is important because it identifies the disease before it causes irreversible damage and improves treatment efficacy. Early diagnosis can also aid research for new drugs that reverse the pathological effects of AD before it becomes irreversible. It is possible to detect biomarkers involved in AD early because “neurodegenerative processes … start up to 20-30 years before symptom onset” [3].
Aggregation of Tau proteins is a major distinguishing feature of AD. Neurofibrillary tangles (i.e. tau protein aggregates) inside neuronal axons block the transport of nutrients and disturb essential functions, which leads to damage and destruction of neurons in the brain [4]. Under normal conditions, Tau proteins are highly soluble (able to be dissolved) and are directly attached to microtubules to support the intracellular transport of proteins and organelles. In the brains of AD patients, Tau proteins are hyperphosphorylated and dissociate from microtubules, which “initiates the conformational change from natively unfolded tau into [insoluble] paired helical filament tau inclusions (protein aggregates) and neurofibrillary tangles” [5]. The dissociation of Tau proteins from microtubules leads to instability and breakdown of microtubules, which leads to neuronal dysfunction.
Currently, the evaluation of cerebrospinal fluid (CSF) biomarkers and positron emission tomography (PET) scans are widely used as diagnostic criteria for AD. The presence of the three main CSF biomarkers, Aβ42, T-tau, and P-tau, is established as a diagnostic criterion for AD [6]. For example, a patient suspected of having AD may get their tau-PET or CSF p-tau checked to confirm a diagnosis. However, drawing blood and evaluating blood biomarkers is less invasive than collecting CSF biomarkers, which requires a lumbar puncture, and more cost-effective than PET scans. Retrieving biomarkers via blood is also more accessible at hospitals and local clinics because it does not require specialized instruments. This literature review focuses on the detection of abnormally high concentrations of tau proteins in blood to diagnose Alzheimer’s disease.
Methodology:
I used the UC Davis library website to access the databases PubMed and APA PsycInfo. I searched a combination of the following terms: “Alzheimer’s”, “Alzheimer’s Disease”, “blood biomarker”, “blood”, “biomarker”, “tau”, “diagnosis”, “early diagnosis”, “literature review”, and “meta-study”. I chose articles published between 2014 and 2022 because research on tau blood biomarkers is a recent field with new advances each year.
Initially, I chose articles with titles such as “fluid biomarkers in diagnosis of Alzheimer’s” to understand the overall use of biomarkers in diagnosing AD. Reading meta-studies about blood biomarkers helped narrow my topic to tau proteins in blood. Afterwards, I skimmed literature reviews to find sections about tau proteins. I also read articles that specifically focused on tau proteins and their use in diagnosis of Alzheimer’s. I read titles and abstracts to rule out articles that only included biomarkers such as CSF biomarkers, microRNA, platelets, apolipoprotein B, or amyloid β peptides.
Results and Discussion:
Biomarkers that originate from the brain, such as tau proteins, are present at low concentrations in the systemic blood circulation because of the blood-brain barrier, which filters molecules moving in and out of the brain. However, in the 2021 article “Blood Biomarkers in Alzheimer’s Disease”, Miren Altuna-Azkargorta and Maite Mendioroz-Iriarte point out that “researchers have described blood-brain barrier dysfunction in patients with AD.” This dysfunction allows passage of molecules between the CSF and blood [3]. Even so, brain protein concentrations in blood remain low because blood is a complex mixture containing many other proteins and proteases, which mix with and hydrolyze (break down) proteins from the brain [7]. Therefore, more sensitive instruments are required to measure tau protein levels accurately and consistently.
However, there is an ongoing debate on the plausibility of t-tau in blood being used to diagnose AD. Lei Feng et al.’s 2021 article, “Current Research Status of Blood Biomarkers in Alzheimer’s Disease: Diagnosis and Prognosis,” reviews the various biomarkers in blood for AD diagnosis. In the section about t-tau proteins, they state that “t-tau may lack diagnostic specificity for AD because of its elevation [in concentration] in a series of pathologies, such as epilepsy and corticobasal degeneration” [7]. Another article, “Review: Tau in Biofluids – Relation to Pathology, Imaging and Clinical Features”, written by Henrik Zetterberg in 2017, is skeptical of blood plasma t-tau proteins because they lack specificity for AD and have a shorter half-life than CSF t-tau [8]. Since levels of t-tau are elevated in other neurodegenerative diseases, these tests may yield a false positive result for AD.
However, Bob Olsson et al. are hopeful about the prospects of plasma t-tau being used to diagnose AD. In their 2017 meta-analysis, “CSF and Blood Biomarkers for the Diagnosis of Alzheimer’s Disease: a Systematic Review and Meta-analysis,” they perform a systematic review of eleven research papers that assess t-tau in blood, including a total of 271 AD patients and 394 controls. With the combined data, the authors conclude that there is a significant difference in t-tau levels in blood between AD patients and control. Even though this association between elevated t-tau levels and AD has been found, Olsson et al. admit that there is large variation among the few studies available. Therefore, more studies must be done to establish a clearer association between t-tau and AD [6].
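For readers unfamiliar with how a meta-analysis combines many small studies into one estimate, the sketch below pools per-study effect sizes with inverse-variance weighting. The numbers are invented for illustration and are not taken from Olsson et al.; their analysis is more elaborate, but the basic pooling logic is similar.

```python
# Fixed-effect (inverse-variance) pooling of standardized mean differences.
# Study effect sizes and variances below are hypothetical, not values from
# Olsson et al. (2017).
import numpy as np

# (effect size d, variance of d) for several hypothetical t-tau studies
studies = [(0.40, 0.05), (0.15, 0.08), (0.55, 0.04), (0.25, 0.10)]

d = np.array([s[0] for s in studies])
var = np.array([s[1] for s in studies])
w = 1.0 / var                          # inverse-variance weights

pooled = np.sum(w * d) / np.sum(w)     # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled effect
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"pooled effect = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```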
In 2016, Niklas Mattsson et al. published “Plasma Tau in Alzheimer Disease”, which looks at a total of 1284 participants between two cohorts: BioFINDER, which is in Sweden, and ADNI, located in the United States and Canada. The authors compare levels of tau proteins in blood plasma between patients with AD, patients with mild cognitive impairment (MCI), and people with normal cognition. With the cohort of patients in the ADNI program, the researchers found that there was an increase in plasma tau in AD patients, but they were not able to replicate these results with the BioFINDER cohort [9]. Varying results between the cohorts can suggest that association between plasma tau and AD is low, but it should be noted that this study was carried out across different locations with different handling protocols, inclusion criteria and technologies [9]. These confounding variables may have affected the results of this study.
Recent studies using ultrasensitive immunoassay methods show more promising results for detecting tau proteins. Leian Chen et al.’s article, “Plasma Tau Proteins for the Diagnosis of Mild Cognitive Impairment and Alzheimer’s Disease: a Systematic Review and Meta-analysis,” reviews 56 studies and summarizes which technologies are effective at detecting an elevation in tau protein concentration from people with normal cognition to MCI to AD patients. They find that the immunomagnetic reduction (IMR) and single molecule array (Simoa) assay methods detect differences in p-tau181, p-tau217, and p-tau231 levels across all three groups [2]. More specifically, blood “p-tau217 [is] more sensitive than p-tau181 and p-tau231 … because p-tau217 is more tightly related to the formation of Aβ plaques in the brain” [2], which is a distinguishing feature of AD. IMR is also consistent in detecting differences in t-tau levels between the normal, MCI, and AD groups [2]. This shows that new technologies are starting to produce the consistent, reproducible blood biomarker data that prior research lacked. However, elevated plasma p-tau181 and p-tau217 levels are also found in other diseases like chronic kidney disease, hypertension, myocardial infarction, and stroke [2], similar to tests for t-tau. Future research could focus on differentiating p-tau protein levels indicated by AD versus other diseases.
Two articles observe p-tau181 for early diagnosis of AD. Joyce R. Chong et al. wrote the article, “Blood-based High Sensitivity Measurements of Beta-amyloid and Phosphorylated Tau as Biomarkers of Alzheimer’s Disease: A Focused Review on Recent Advances,” in 2021, and it looks at studies that observe p-tau181 using the Simoa immunoassay platform. They find that plasma p-tau181 can “differentiate between AD and non-AD neurodegenerative diseases” because it is associated with other AD-specific pathologies such as “NFT burden, grey matter atrophy, hippocampal atrophy, cortical atrophy brain, metabolic dysfunction and cognitive impairment” [10]. The study also reports that “the earliest increases in plasma p-tau181 occurred shortly before PET and CSF Aβ markers reached abnormal levels” [10]. Another article, written by Syed Haris Omar and John Preddy in 2020, titled “Advantages and Pitfalls in Fluid Biomarkers for Diagnosis of Alzheimer’s Disease,” looks at a study that used IMR to observe p-tau181. Omar and Preddy conclude that plasma p-tau181 can differentiate between AD and cognitive decline due to age because there is an increase in the ratio of p-tau181 to t-tau from controls to patients with mild AD: 14.4% and 19.5%, respectively [11]. Therefore, developing more accurate detection methods for plasma p-tau181 may allow for earlier diagnosis of AD than current diagnostic procedures, which use PET scans and CSF biomarkers, because it is an AD-specific biomarker.
Even though plasma tau currently cannot be used to diagnose Alzheimer’s Disease, recent studies show promising use in preclinical settings. In 2022, Rik Ossenkoppele et al. published the article “Tau Biomarkers in Alzheimer’s Disease: Towards Implementation in Clinical Practice and Trials.” Ossenkoppele et al. looked at tau pathology identified through PET, CSF, and plasma. The authors recommend using plasma p-tau as a “screening method to identify preclinical Alzheimer’s disease among cognitively unimpaired individuals,” but a “positive result should be confirmed using tau-PET or CSF p-tau” [5]. In “Plasma Tau Association with Brain Atrophy in Mild Cognitive Impairment and Alzheimer’s Disease,” Kacie D. Deters and her colleagues also highlight plasma tau’s use as a screening tool rather than a diagnostic tool for “cognitively normal or mildly symptomatic older adults” [12]. Currently, there is more research confirming the reliability of tau-PET and CSF p-tau in diagnosing AD compared to plasma tau. However, performing PET scans and collecting CSF samples for every patient suspected of having AD is not cost-effective, and these tests may not be available in clinical settings that lack the specialized instruments to perform them. Since collecting blood samples is easier, checking for plasma p-tau in the pre-clinical phase can narrow down the patients who will need the more invasive procedures.
Conclusion:
The research on plasma tau proteins is new and has made significant progress in the past decade, but more research must be done on this topic. A more uniform way of measuring tau proteins must be established so studies can be replicated and yield similar results. Because these are recent advances, longitudinal studies are much needed: blood tau protein levels should be monitored in patients with AD over time to observe how they change with disease progression. In the near future, research on tau proteins may also inform the development of drugs to treat AD.
Preliminary evidence for differential habitat selection between bird species of contrasting thermal-tolerance levels
By Phillips.
Author’s note: Since coming to college, I have wanted to conduct research on the environmental impacts of agriculture and contribute to efforts to make farming work for both people and nature. In pursuit of this goal, I signed up as an intern with Daniel Karp’s agroecology lab in my freshman year and stayed with them for my entire undergrad. During this internship, I worked alongside several Ph.D. students, such as Katherine Lauck and Cody Pham, who research the cumulative effects of land conversion and climate change on native avifauna at Putah Creek. I was so inspired by their work that I decided to conduct an independent project investigating similar phenomena. Specifically, I was curious about how birds respond to temperature across multiple landscapes, and how this pattern of behavior might influence their choice of habitat. While reading this paper, I would like you to consider the broader implications of the findings as they pertain to species conservation in the context of climate change.
Abstract
Increasing frequency and severity of temperature spikes caused by climate change will disproportionately impact heat-sensitive species. However, certain types of vegetation may protect animals from temperature spikes. Heat-sensitive species can retreat to shaded microhabitats when temperature increases, allowing them to avoid detrimental effects on fitness. Here, we examined habitat selection and behavioral responses to temperature of Western Bluebirds (Sialia mexicana) and Northern Mockingbirds (Mimus polyglottos). We conducted transect surveys and collected behavioral data on bird movement for two months in riparian forest and perennial cropland in the Central Valley of California, where breeding season temperatures are often above 35°C. Bluebirds were observed more frequently in shaded riparian forest, while mockingbirds were observed more frequently in exposed agricultural fields. Correspondingly, bluebirds became less active at higher temperatures, while mockingbirds exhibited no response. Together, our results imply that heat-sensitive species may be more likely to select natural or semi-natural habitats and change their behaviors when temperatures spike. The results of this study imply that the combined effects of anthropogenic land development and climate change may be more destructive for heat-sensitive species than for heat-tolerant species.
Introduction
Climate change is increasing the frequency and intensity of temperature spikes across the world [1]. Many species will likely experience increased mortality due to these extreme conditions [2–4], with heat-sensitive species experiencing especially detrimental effects [5,6]. However, thermally-buffered habitats could mitigate the impact of heat spikes on organisms, as certain habitat features, like vegetative cover, have been shown to cool local temperatures through shading and evapotranspiration [7,8]. Landscapes with high amounts of thermally-buffered habitats, such as closed-canopy forests, have been shown to have less dramatic temperature extremes than open habitats [9,10]. Furthermore, it has been shown that animals in these thermally-buffered habitats are less likely to be impacted by rising global temperatures [11,12]. As such, organisms that are sensitive to temperature extremes may preferentially select for these habitats, and therefore may be able to avoid potentially lethal effects. Birds have been observed to retreat to shaded habitats when temperatures spike [13]. However, it is unclear whether heat-sensitive species specifically select for thermally-buffered habitats, or if heat-tolerant species persist in non-buffered habitats. Therefore, we sought to understand how the habitat selection of bird species may be associated with their behavioral responses to temperature.
Bird populations in North America are in rapid decline [14], and are predicted to continue declining with climate change [15]. As such, determining the habitat requirements of birds in response to increasingly extreme temperatures could be crucial to their conservation. We conducted behavioral surveys of birds in the Central Valley of California to address two questions: 1) does habitat selection differ between Western Bluebirds and Northern Mockingbirds, and 2) are behavioral responses to temperature different between these species? We hypothesized that bird species which exhibit significantly different behavior during high temperatures will preferentially select habitats with more vegetative cover.
Methods
Experimental design
We selected two sites along Putah Creek in the Central Valley of California. In this system, temperatures often reach 35°C during the hottest months of the year. These sites contain a combination of riparian (forest along a riverbank) and agricultural land and are approximately five miles apart from each other. At each site, the two focal land cover types, riparian forest and perennial agriculture, were present within one half-mile of each other (Figure 1). We obtained observations along four 100 meter (m) transects. In the riparian areas, we placed transects along regions of the sites where vegetation was sparse enough that birds could be observed, as dense vegetation made it difficult to track individual birds. In the agricultural areas, we placed transects close enough to the crops that birds could be spotted. Transects were placed approximately 50 meters apart from each other.
We focused on Western Bluebirds (Sialia mexicana) and Northern Mockingbirds (Mimus polyglottos) due to their high abundance at Putah Creek. Additionally, we chose these species because they forage on the ground rather than in the air, and therefore were easier to observe with the naked eye.
Figure 1. Our two study sites were in close proximity to both riparian and agricultural habitats along Putah Creek. At each site, we observed birds along a total of 16 transects (depicted in red).
Data collection
We conducted our surveys from late April to early July 2022, the height of the breeding season for our study species. We visited each site at least once a week, in either the morning or early afternoon. During each visit, I would walk along the transects. Once a bird of either target species was spotted, I would track the bird for two minutes and record all behaviors displayed, along with the amount of time spent engaging in each behavior. These behaviors included “foraging” (searching for, chasing, or eating an insect), “moving” (locomotion with wings or legs), “resting” (standing or sitting motionless), “singing” (repetitive vocalization for more than three seconds), “preening” (use of beak to position feathers), and “disputing” (fighting between birds that occurs due to territorial disputes). I recorded temperature and wind speed each hour using a Kestrel 2000 Weather Meter.
Data analysis
We ran Fisher’s exact tests to determine whether mockingbirds and bluebirds preferentially selected different landscape types across sites. The variables in this test were ‘species’ and ‘landscape type,’ which was defined as either “Agriculture” or “Riparian.” We ran the test across both sites and did not distinguish between the two separate sites depicted in Figure 1.
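For readers unfamiliar with the test, the snippet below shows how a Fisher’s exact test on a species-by-landscape contingency table can be run in Python. The counts are placeholders for illustration, not the study’s actual observation tallies, and this is not the authors’ analysis script.

```python
# Fisher's exact test on a 2x2 species-by-landscape contingency table.
# Counts are placeholders, not the study's actual tallies.
from scipy import stats

#                 riparian   agriculture
# bluebird            28          7
# mockingbird          5         29
table = [[28, 7],
         [5, 29]]

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```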
Then, we implemented multiple linear regression models examining the relationship between the time spent engaging in various behaviors and temperature for each species. We considered the time spent engaged in a particular behavior to be the percentage of time during the two-minute observation period in which the individual bird exhibited that behavior (i.e., time spent moving, foraging, resting, preening, singing, or disputing).
To account for the effects of spatial autocorrelation (the tendency of areas that are close together to provide similar data values), we first included a site covariate in our models. We additionally attempted to control for the effects of a natural circadian rhythm on behavior by including a time-of-day covariate. Because temperature and time were highly correlated (r = 0.696 for bluebird observations and 0.548 for mockingbird observations), we included these covariates using a temperature-residual approach. Specifically, we regressed temperature against time of day and obtained residual values, representing whether temperatures were hotter or cooler than the average expected temperature at any given time of day. We then ran a multiple linear regression including the effects of temperature residuals, time of day, and site on bird behavior.
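The residual approach can be sketched in code: regress temperature on time of day, keep the residuals as a "hotter or cooler than expected for that hour" variable, and use those residuals (together with time and site) as predictors of behavior. The example below is a minimal illustration with invented data and column names, not the study’s actual dataset or script.

```python
# Minimal sketch of the temperature-residual regression approach.
# The data frame and its values are hypothetical stand-ins for the field data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per 2-minute observation: percent of time spent moving, temperature
# (deg C), time of day (decimal hours), and site identifier.
df = pd.DataFrame({
    "pct_moving": [40, 35, 22, 15, 30, 12, 45, 18, 25, 10],
    "temp_c":     [24, 26, 31, 34, 27, 35, 23, 33, 29, 36],
    "hour":       [8,  9,  12, 14, 10, 15, 8,  13, 11, 15],
    "site":       ["A", "A", "A", "A", "B", "B", "B", "B", "A", "B"],
})

# Step 1: regress temperature on time of day; the residuals capture how much
# hotter or cooler it was than expected for that hour.
temp_model = smf.ols("temp_c ~ hour", data=df).fit()
df["temp_resid"] = temp_model.resid

# Step 2: model behavior as a function of temperature residuals, time of day,
# and site (the site term accounts for repeated observations per location).
behavior_model = smf.ols("pct_moving ~ temp_resid + hour + C(site)", data=df).fit()
print(behavior_model.summary())
```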
Results
Landscape preference
Bluebirds and mockingbirds exhibited significantly different habitat preferences. Bluebirds preferentially resided in riparian areas, whereas mockingbirds preferentially resided in agricultural landscapes across both sites (Fisher’s exact test, p = 4.583E-15; Figure 2).
Figure 2. Mockingbirds (n=34) are observed to reside in agricultural landscapes more frequently than riparian landscapes. Bluebirds are observed to reside in riparian landscapes more frequently than agricultural landscapes (n=35).
Changes in patterns of behavior
We found that temperature negatively affected the amount of time that bluebirds spent moving (Linear regression, p = 0.0077, F = 8.069, df = 1, 33; Figure 3; Supp. 1). However, temperature did not significantly affect mockingbird movement (Linear regression, p = 0.297, F = 1.125, df = 1, 32; Figure 3; Supp. 1).
Results were broadly similar after including ‘site’ as another effect in the model to account for multiple observations at the same location. Specifically, temperature still did not affect mockingbird movement (Multiple regression, p = 0.635, F = 0.577, df = 3, 30; Supp. 3) and marginally affected bluebird movement (Multiple regression, p = 0.0682, F = 2.622, df = 3, 31; Supp. 3). However, one of the sites had very few bluebird observations (n=4); when this site was removed from the model, temperature again negatively affected bluebird movement (Linear regression, p = 0.0123, F = 7.138, df = 1, 29; Supp. 2).
The last model we ran tested the effects of both temperature residuals and time of day on bird behavior. In these models, temperature again did not have a significant effect on mockingbird behavior but did have a marginal effect on bluebird movement (p = 0.07; Supp. 4).
For all of the models, resting, foraging, disputing, singing, and preening of bluebirds and mockingbirds exhibited no significant association with any environmental variable (Supp. 1, Supp. 2, Supp. 3, Supp. 4).
Figure 3. Bluebirds (left) are observed to reduce the percentage of time they spend moving as temperature increases. Mockingbird movement (right) did not significantly decline with rising temperature. The black points represent individual bird observations, the solid lines represent the linear model predictions, and the gray bands represent the 95% confidence intervals.
Discussion
Our results suggest that bluebirds select for shaded riparian habitats, while mockingbirds select for exposed agricultural habitats. Correspondingly, the temperature-altered patterns of movement in bluebirds suggest that they are sensitive to heat and may select thermally-buffered habitats as a result. In contrast, the lack of observed heat sensitivity in mockingbirds suggests that their persistence in open habitats could be driven in part by thermal tolerance. While more data are required to draw definitive conclusions, when we considered only patterns at the site with sufficient data, we found significant evidence for temperature-altered patterns of movement. Together, these results suggest that temperature sensitivity could drive patterns of habitat selection.
Previous research also suggests that habitats with low vegetative cover (i.e., without thermally-buffered microclimates) are likely to contain heat-tolerant species [16,17]. For example, Wilson et al. 2007 demonstrated that populations of leaf-cutter ants (Atta sexdens) residing in cities took 20% longer to succumb to high temperatures than ants dwelling in rural areas. In Brans et al. 2017, it was observed that water fleas (Daphnia magna) from urban areas were more tolerant to high temperatures than rural populations, partially because they had smaller body sizes. Both studies imply that organisms must have high heat tolerance to live in habitats with low vegetative cover. This is similar to our finding that mockingbirds, a heat-tolerant species, were more likely to reside in unvegetated agricultural landscapes than were bluebirds, a heat-sensitive species. However, while the previous studies provide evidence that organisms become heat-tolerant in these landscapes due to natural selection, our findings suggest that behavioral differences between heat-tolerant species and heat-sensitive species may also cause unvegetated landscapes to become dominated by heat-tolerant species.
Additionally, we demonstrate that riparian and other thermally-buffered habitats could be crucial to the persistence of heat-sensitive species. Other studies have shown that vertebrates are more likely to exhibit heat-related mortality in habitats with low vegetation cover [12,18]. For example, Zuckerberg et al. 2018 demonstrated that avian survival in small grassland patches was negatively associated with temperature, while survival in large grassland patches was not. Additionally, Lauck et al. 2023 showed that temperature spikes are associated with a decline in bird reproduction across the continental United States for organisms living in agricultural areas, but not for organisms living in forests. These results suggest that vegetation protects vertebrates from heat stress. Although the mechanisms of this protection are not clear, one potential explanation is that vegetation provides shaded areas that animals can use as refuges to avoid lethal temperatures [7]. Additionally, it has been shown that plants regulate local temperatures through evapotranspirative cooling [8], potentially playing a role in protecting vertebrates from heat spikes.
One caveat of our study is that bluebird responses were only marginally significant under multiple regression models that included time of day as a covariate. Associations between bird behavior and time could either be due to circadian rhythms or temperature shifts; it is difficult to statistically disentangle the effects of temperature and time of day. However, the significant results from the models including only temperature imply that bluebirds do indeed alter their behavior in response to environmental factors that likely include temperature.
Conclusion
Our findings provide preliminary evidence that Western Bluebirds are temperature-sensitive and preferentially select vegetated habitats, while Northern Mockingbirds do not preferentially select vegetated habitats. To obtain enough data to provide definitive evidence of these patterns, the methods could be repeated for several more years and across more sites. Nonetheless, the results from this study suggest that anthropogenic land development will be more destructive for heat-sensitive species than for heat-resistant species. As such, we suggest incorporating thermally-buffered habitats such as groups of trees or hedgerows in working landscapes to mitigate the negative impacts of anthropogenic land development on heat-sensitive organisms.
Interview: John Davis
By Isabella Krzesniak.
INTRODUCTION
John Davis is a 5th year Ph.D. candidate in the Integrative Genetics and Genomics graduate group at UC Davis. He works in the Maloof Lab and uses bioinformatics to analyze genetic variation among native California wildflowers in the Streptanthus clade in different environments and uses data to create gene models.
The project he is working on has two main goals. First, he aims to create genomic resources for Streptanthus clade species through reference genomes and transcriptomes, which can be used to analyze differential gene expression in different individuals. Second, he aims to examine the germination niche of Streptanthus clade species, the conditions required for them to germinate and the gene networks expressed during this life stage.
These models have many applications concerning adaptation in the wake of climate change; for instance, they can help ecologists make informed decisions such as whether a crop will function well in a given region as the climate warms. Davis’ work is part of a collaborative study between the Maloof, Gremer, Strauss, and Schmitt labs in the Department of Plant Biology.
What does your research consist of and what are its potential applications?
We’re looking at how plant populations persist in different environments. So even though it’s wildflowers that are closely related, you can also look at how they differ in terms of survival in different environments. If you have an environment that’s great for one crop, but it’s either getting wetter or hotter, the crop might not survive very well. But if you know which genes it has or how it functions, you can move it to a different location or potentially just bring in a different crop that will function well in that region. From an ecological standpoint, it’s a matter of which species will survive and which ones will die off. Underlying all of it are what genes the plant has.
What work are you doing with the project in particular?
The main thing I’m doing right now is building genomic references. We’re trying to do gene expression studies, but if we don’t know what the genes are, we can’t compare the differences in gene expression. So, one of the things I’m doing is building these reference genomes and transcriptomes to determine which genes are in the species. And then from there, I hope to build gene models, construct coexpression networks, and predict germination based on gene expression profiles. To analyze the data, I use Python, Linux, Excel, and R. Another thing I’m doing is building transcriptomes which are collections of just the genes that are expressed. Then, ideally, my goal would be to develop gene networks that would basically tell us which species have these genes that are needed to survive in these environments and which ones don’t.
Why are you studying Streptanthus in particular and what exactly are you doing as part of the study?
The Streptanthus clade has a well-documented phylogeny of closely-related species. Adding genomic resources will improve our ability to perform genetic analyses.
After seed collection, what steps do you take to analyze your data?
We took our seeds and sent them to a collaborating company, where they extracted the RNA and prepared RNA-seq libraries. Those libraries were then sent to the UC Davis Genome Center to be sequenced, and the Genome Center sends us back the sequence reads. We use those reads to assemble transcripts and to do gene expression analysis, where we start making models that relate gene expression to different climate variables like precipitation, temperature, and elevation. They’re all correlated.
What kind of models do you employ for data analysis?
It’s just basic linear models and other types of models. You have your variable, which in our case would be germination proportion, and that is a function of gene expression. Gene expression is affected by temperature, genotype, and precipitation, so it’s just models on models.
Has the project been successful?
We did what we set out to accomplish with the funding. Right now, the final bit of sequencing data is coming in and then we’re actually starting to dive into it and produce actual results.
What are the difficulties of working with plants?
I love genetics and genomics stuff and I just fell into working with plants. Plants are the hardest compared to bacteria and humans. Plant genomes are ridiculous and weird things happen all the time. Humans are diploid; we’re boring. I finished working on a project with Brassica napus. It’s an allotetraploid (having four sets of chromosomes derived from different species), which is a hybrid of two different plants, Brassica rapa and Brassica oleracea, so it has two separate diploid genomes in itself. You have the two genomes that are crossing over with each other through homeologous exchange. So when you’re going to try to assemble that genome, you don’t know if a sequence came from the B. rapa genome or the B. oleracea genome. I think strawberries are up to eight copies of each chromosome, so it makes it a lot harder when you’re trying to find alleles. When you’re doing an experiment where you’re trying to knock out alleles of a genome, you have to knock out every copy in each chromosome. Whereas in humans, you only have to knock out two of them to make it homozygous. But in a strawberry, you have to knock out every one of those copies. Plants just seem like the hardest of the group. And then you have pine trees where genomes have 22.5 gigabases (22.5 billion base pairs) and humans only have 3.2 gigabases.
How has extreme weather (wildfires, flooding) over the past years affected the study?
So one of the struggles of our project is that we’re looking at how the climate affects germination, but at the beginning of our project, there were droughts like crazy and wildfires, and that affects the genetics of the population and what survives and what doesn’t.
You’re trying to do all these environmental studies that look at the long-term effects compared to now, but when you’re a grad student on a grant, the grant only lasts four to five years. But, how do you take four to five years of data and project it out decades ahead without having data from decades prior? It just gets difficult when you only have four seasons that you can collect data from, and two of those are on fire and one of them is flooding. None of this seems like normal conditions historically. So it can make it a little bit tough to tease out what’s long-term variation in genetics in response to what’s happening in the environment right now.
What makes ecological, as opposed to transgenic research, difficult?
With our studies, we don’t knock out any genes or use any transgenics. Ours is all ecology. That’s the difficulty of our project. With Arabidopsis (a model organism in plant biology), the genes are pretty much homozygous and it’s a lot easier. In our case, all the seeds are collected in the wild, so they’re going to be heterozygous. We can try to make more of the seed by breeding in the greenhouse to expand our seed stock, but we can only do so much since it takes up space to make more seed. The field is always going to be changing too. When you collect seeds from one year, the genetics could be completely different from the genetics of the next year.
Why don’t you use transgenics in your studies?
You don’t want to dive into transgenics (organisms whose genes have been altered) because there’s so much pushback against it. These are all natural California species and you don’t want to put something in the environment that can outcompete the natural population.
We’re trying not to affect the study environment that we’re looking at. When we do seed collections, we don’t take from at-risk populations of the certain species, and when we collect seeds, we only take a percentage of the seeds from each plant. We don’t want to affect the growth for the next season, so ultimately, we’re trying to do the minimum amount of disruption to the environment that we’re studying. We potentially hope to use our results for rehabilitation efforts. We’ll be able to tell which ones need more help to survive and which ones are fine.
Canine Cloning: History and Recent Trends
By Sara Su, Animal Science and English ’24
INTRODUCTION
In 1996, Dolly the sheep became the first mammal to be successfully cloned from an adult somatic cell [1, 2]. Since then, 22 other animal species have been cloned, including rats, mice, cattle, goats, camels, cats, pigs, mules, and horses [3-12]. Among these, about 19 species have clones surviving to adulthood. In 2005, Snuppy became the first cloned dog to survive to adulthood; he was derived from somatic cells taken from the ear skin of a male Afghan hound. He was the 15th animal to be cloned and lived to the age of 10 [13]. He was cloned using somatic cell nuclear transfer (SCNT), a common method that involves removing the nucleus from an oocyte (egg cell) and replacing it with a nucleus from a somatic cell, typically a fibroblast [14]. Fibroblasts are a type of stromal cell found in connective tissues such as the skin and tendons; they are often used in cloning because they are relatively easy to culture. After nuclear transfer, the reconstructed oocyte, which is similar to a fertilized egg, is activated and transferred into the oviduct of a surrogate female, usually in groups of 10-15 reconstructed oocytes per recipient. After pregnancy is confirmed, viable offspring are born via Caesarean section [13-16]. Though SCNT is the most viable method of cloning so far, it remains very inefficient and the live birth ratio is extremely low [13, 17, 18]. Nearly 30 years after Dolly, much about cloning remains poorly understood, and it presents unique challenges in different species; this review will discuss the relevance of the canine model to human health, canine-specific challenges in using SCNT for cloning, and recent trends among successfully cloned dogs.
The Interest in Dogs as a Medical Model
Overall, dogs have become increasingly relevant as a medical model for human diseases in the 21st century. This is because many heritable canine diseases are orthologous to human ones, meaning they share similar traits and functions [19]. The dog genome has been found to be closer to the human genome than the mouse genome is; while mice are commonly used as medical models for humans, canine models have also proven useful for comparative studies due to dogs’ relatively long lives, larger size, and similar tissue functions [20]. At least 350 shared genetic diseases between dogs and humans have been discovered so far, affecting a variety of systems such as the dermatological, lysosomal, hematological/immunological, and muscular/skeletal systems [21]. All of these contribute to the rising application of canine medical models to study disease mechanisms for well-known conditions such as Alzheimer’s and diabetes, while also making it possible to explore clinical therapies for rare genetic diseases that would otherwise be difficult to study. Other fields of study using canine colonies include but are not limited to: organ transplants, drug development, non-invasive biomarker generation, and psychological disorders [19-24].
Dog-Specific Challenges in Cloning
There are a few species-specific reasons why dog cloning remains inefficient. Cloning efficiency, defined as the ratio of live offspring to the number of transferred reconstructed oocytes, is usually no more than 3% across all species, regardless of the age or type of donor cell. Additionally, the average cloning efficiency between breeds does not differ significantly [21]. For canines, cloning efficiency is higher than in many other reported species, at 2% [25]. However, this is still an extremely low number, and a specific challenge when it comes to dogs is the viable maturation of oocytes in vitro [13, 26]. It should be noted that the pregnancy rate can be increased by increasing the number of reconstructed oocytes injected into surrogates, but cloning efficiency itself does not change. Another issue with canines is the vast number of breeds within the species, which makes it difficult to select compatible nuclear and oocyte donors, as well as adequate surrogates [21]. Finally, a widespread issue with cloned individuals is postnatal care – although clones that are born healthy generally survive, cloned animals are just as vulnerable to disease and poor management as any other animal [21]. Dolly the sheep died early from just such a cause, rather than from complications directly related to cloning. Overall, there is insufficient knowledge of the nuances of canine reproductive systems, including a lack of comprehensive protocols for oocyte maturation in culture and for postnatal clone care, leading to further difficulties in dog-specific cloning.
Snuppy and His Clones
As previously stated, the first dog to be successfully cloned and survive to adulthood was a male Afghan hound named Snuppy, short for Seoul National University Puppy. Snuppy was born in 2005, and was the only survivor out of 123 recipient surrogates. Snuppy was cloned from fibroblast cultures derived from the biopsy of the ear-skin of an Afghan hound named Tai. He was confirmed to be genetically identical to Tai through the use of canine-specific biomarkers [27]. For this experiment, 3 out of 123 surrogates resulted in pregnancies, 2 were carried to term, and 1 survived to adulthood – the other puppy died on day 22 due to aspiration pneumonia after experiencing neonatal respiratory distress. Although the efficiency of cloning is very low in the first place, this particular experiment had a cloning efficiency rate much lower than expected – 2 puppies were born to 123 surrogates, or 1.6% [13]. Snuppy ended up living to be 10 years old, while his donor Tai lived to be 12 years old – both individuals died of cancer-related causes, but were generally healthy until then. It should be noted that the median lifespan of Afghan hounds is reported to be 11.9 years, so their lifespans were not out of the norm [28].
In 2017, Snuppy himself was cloned. This time the cloning efficiency and success rates were much higher, resulting in 3 clones who are still alive today. Rather than using fibroblasts, this experiment used adipose-derived mesenchymal stem cells (ASCs). The ASCs were cultured in Dulbecco's Modified Eagle Medium (DMEM), a technique that increases the oocyte fusion rate in SCNT [29]. This experiment resulted in pregnancy and delivery rates of 42.9% (3 of 7 recipients) and 4.3% (4 clones out of 94 embryos). Compared to the original Snuppy experiment's 2.4% (3 out of 123) and 0.2% (2 from 1,095), these changes in technique corresponded to a large jump in overall efficiency [13, 28].
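For readers who want to see exactly how these percentages are derived, the short sketch below reproduces the arithmetic with the figures reported above. It is purely illustrative and not code from either study; the per-surrogate versus per-embryo denominators are spelled out because the two rates are easy to conflate.

```python
# Illustrative arithmetic only: rates reported for Snuppy (2005) and his re-clones (2017).
def rate(successes: int, attempts: int) -> float:
    """Return a simple percentage, e.g. live births per transferred embryo."""
    return 100 * successes / attempts

# Snuppy (2005): 1,095 reconstructed embryos into 123 surrogates, 3 pregnancies, 2 live births.
print(f"2005 pregnancy rate: {rate(3, 123):.1f}% (3 of 123 surrogates)")   # ~2.4%
print(f"2005 delivery rate:  {rate(2, 1095):.1f}% (2 of 1,095 embryos)")   # ~0.2%

# Re-cloning of Snuppy (2017): 94 embryos into 7 recipients, 3 pregnancies, 4 live births.
print(f"2017 pregnancy rate: {rate(3, 7):.1f}% (3 of 7 recipients)")       # ~42.9%
print(f"2017 delivery rate:  {rate(4, 94):.1f}% (4 of 94 embryos)")        # ~4.3%
```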
Other studies have also been published exploring the viability of cloned working dogs. For dogs, SCNT can be used regardless of sex, age, and breed [13, 30]. It was recently concluded that cloned dogs have behavior patterns similar to their cell donors and can lead healthy lives with life spans comparable to naturally bred dogs [21, 29]. Overall, about 20% of dog breeds recognized by the American Kennel Club have been successfully cloned, a high rate compared to other mammals [21]. Though more research is needed to improve dog cloning efficiency, it has already been shown that clones of drug detection dogs [31] and cancer-sniffing dogs [33] outperform naturally bred dogs, scoring higher averages on qualification tests for these services [32-33].
CONCLUSION
To conclude, both the study that created Snuppy and the subsequent cloning of his cells demonstrate great potential for the common use of canine clones in the modern world. Multiple obstacles to canine cloning were recognized and overcome, though the cloning efficiency rate can be further improved by gaining greater knowledge of the canine reproductive system. Additionally, it was shown that clones that are born healthy are not at greater risk of disease or a shortened lifespan – they are comparable to the average puppy. All of this contributes to the feasibility of cloning working dogs: studies are already exploring the possibility of using clones of dogs that perform drug and disease detection, knowing that the physical qualifications for such jobs are strongly linked to specific genetic traits.
REFERENCES
- Wilmut I, Schnieke AE, McWhir J, Kind AJ, Campbell KH. Viable offspring derived from fetal and adult mammalian cells. Nature. 1997 Feb 27;385(6619):810-3. doi: 10.1038/385810a0. Erratum in: Nature 1997 Mar 13;386(6621):200. PMID: 9039911.
- Campbell KH, McWhir J, Ritchie WA, Wilmut I. Sheep cloned by nuclear transfer from a cultured cell line. Nature. 1996 Mar 7;380(6569):64-6. doi: 10.1038/380064a0. PMID: 8598906.
- Zhou Q, Renard JP, Le Friec G, Brochard V, Beaujean N, Cherifi Y, Fraichard A, Cozzi J. Generation of fertile cloned rats by regulating oocyte activation. Science. 2003 Nov 14;302(5648):1179. doi: 10.1126/science.1088313. Epub 2003 Sep 25. PMID: 14512506.
- Wakayama, T., Perry, A., Zuccotti, M. et al. Full-term development of mice from enucleated oocytes injected with cumulus cell nuclei. Nature 394, 369–374 (1998). https://doi.org/10.1038/28615
- Kato Y, Tani T, Sotomaru Y, Kurokawa K, Kato J, Doguchi H, Yasue H, Tsunoda Y. Eight calves cloned from somatic cells of a single adult. Science. 1998 Dec 11;282(5396):2095-8. doi: 10.1126/science.282.5396.2095. PMID: 9851933.
- Cibelli JB, Stice SL, Golueke PJ, Kane JJ, Jerry J, Blackwell C, Ponce de León FA, Robl JM. Cloned transgenic calves produced from nonquiescent fetal fibroblasts. Science. 1998 May 22;280(5367):1256-8. doi: 10.1126/science.280.5367.1256. PMID: 9596577.
- Baguisi, A., Behboodi, E., Melican, D. et al. Production of goats by somatic cell nuclear transfer. Nat Biotechnol 17, 456–461 (1999). https://doi.org/10.1038/8632
- Wani NA, Wernery U, Hassan FA, Wernery R, Skidmore JA. Production of the first cloned camel by somatic cell nuclear transfer. Biol Reprod. 2010 Feb;82(2):373-9. doi: 10.1095/biolreprod.109.081083. Epub 2009 Oct 7. PMID: 19812298.
- Shin, T., Kraemer, D., Pryor, J. et al. A cat cloned by nuclear transplantation. Nature 415, 859 (2002).
- Polejaeva, I., Chen, SH., Vaught, T. et al. Cloned pigs produced by nuclear transfer from adult somatic cells. Nature 407, 86–90 (2000). https://doi.org/10.1038/35024082
- Woods GL, White KL, Vanderwall DK, Li GP, Aston KI, Bunch TD, Meerdo LN, Pate BJ. A mule cloned from fetal cells by nuclear transfer. Science. 2003 Aug 22;301(5636):1063. doi: 10.1126/science.1086743. Epub 2003 May 29. PMID: 12775846.
- Galli, C., Lagutina, I., Crotti, G. et al. A cloned horse born to its dam twin. Nature 424, 635 (2003). https://doi.org/10.1038/424635a
- Lee BC, Kim MK, Jang G, Oh HJ, Yuda F, Kim HJ, Hossein MS, Kim JJ, Kang SK, Schatten G, Hwang WS. Dogs cloned from adult somatic cells. Nature. 2005 Aug 4;436(7051):641. doi: 10.1038/436641a. Erratum in: Nature. 2005 Aug 25;436(7054):1102. Erratum in: Nature. 2006 Mar 9;440(7081):164. Erratum in: Nature. 2006 Oct 12;443(7112):649. Shamim, M Hossein [corrected to Hossein, M Shamim]. PMID: 16079832.
- Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006 Aug 25;126(4):663-76. doi: 10.1016/j.cell.2006.07.024. Epub 2006 Aug 10. PMID: 16904174.
- Kim GA, Oh HJ, Park JE, Kim MJ, Park EJ, Jo YK, Jang G, Kim MK, Kim HJ, Lee BC. Species-specific challenges in dog cloning. Reprod Domest Anim. 2012 Dec;47 Suppl 6:80-3. doi: 10.1111/rda.12035. PMID: 23279471.
- Moon, P. F., Erb, H. N., Ludders, J. W., Gleed, R. D. & Pascoe, P. J. Perioperative management and mortality rates of dogs undergoing cesarean section in the United States and Canada. J. Am. Vet. Med. Assoc. 213, 365–369 (1998).
- Malin K, Witkowska-Piłaszewicz O, Papis K. The many problems of somatic cell nuclear transfer in reproductive cloning of mammals. Theriogenology. 2022 Sep 1;189:246-254. doi: 10.1016/j.theriogenology.2022.06.030. Epub 2022 Jun 30. PMID: 35809358.
- Tsunoda Y, Kato Y. Recent progress and problems in animal cloning. Differentiation. 2002 Jan;69(4-5):158-61. doi: 10.1046/j.1432-0436.2002.690405.x. PMID: 11841470.
- Gurda BL, Bradbury AM, Vite CH. Canine and Feline Models of Human Genetic Diseases and Their Contributions to Advancing Clinical Therapies . Yale J Biol Med. 2017 Sep 25;90(3):417-431. PMID: 28955181; PMCID: PMC5612185.
- Parker HG, Kim LV, Sutter NB, Carlson S, Lorentzen TD, Malek TB, Johnson GS, DeFrance HB, Ostrander EA, Kruglyak L, 2004: Genetic structure of the purebred domestic dog. Science 304, 1160–1164.
- Olsson PO, Jeong YW, Jeong Y, Kang M, Park GB, Choi E, Kim S, Hossein MS, Son YB, Hwang WS. Insights from one thousand cloned dogs. Sci Rep. 2022 Jul 1;12(1):11209. doi: 10.1038/s41598-022-15097-7. PMID: 35778582; PMCID: PMC9249891.
- Sargan, D.R. IDID: Inherited Diseases in Dogs: Web-based information for canine inherited disease genetics. Mamm Genome 15, 503–506 (2004). https://doi.org/10.1007/s00335-004-3047-z
- Zhang J, Chen X, Kent MS, Rodriguez CO, 2009: Establishment of a dog model for the p53 family pathway and identification of a novel isoform of p21 cyclin-dependent kinase inhibitor. Mol Cancer Res 7, 67–78.
- Zhang J, Chen X, Kent MS, Rodriguez CO, 2009: Establishment of a dog model for the p53 family pathway and identification of a novel isoform of p21 cyclin-dependent kinase inhibitor. Mol Cancer Res 7, 67–78.
- Yanagimachi R. Cloning: experience from the mouse and other animals. Mol Cell Endocrinol. 2002 Feb 22;187(1-2):241-8. doi: 10.1016/s0303-7207(01)00697-9. PMID: 11988333.
- Lange-Consiglio, A. et al. Oviductal microvesicles and their effect on in vitro maturation of canine oocytes. Reproduction 154, 167–180. https://doi.org/10.1530/REP-17-0117 (2017).
- Lee JB, Park C; Seoul National University Investigation Committee. Molecular genetics: verification that Snuppy is a clone. Nature. 2006 Mar 9;440(7081):E2-3. doi: 10.1038/nature04686. PMID: 16528814.
- Kim MJ, Oh HJ, Kim GA, Setyawan EMN, Choi YB, Lee SH, Petersen-Jones SM, Ko CJ, Lee BC. Birth of clones of the world’s first cloned dog. Sci Rep. 2017 Nov 10;7(1):15235. doi: 10.1038/s41598-017-15328-2. PMID: 29127382; PMCID: PMC5681657
- Kim GA, Oh HJ, Lee TH, Lee JH, Oh SH, Lee JH, Kim JW, Kim SW, Lee BC. Effect of culture medium type on canine adipose-derived mesenchymal stem cells and developmental competence of interspecies cloned embryos. Theriogenology. 2014 Jan 15;81(2):243-9. doi: 10.1016/j.theriogenology.2013.09.018. Epub 2013 Oct 21. PMID: 24157230.
- Kim MJ, Oh HJ, Hwang SY, Hur TY, Lee BC. Health and temperaments of cloned working dogs. J Vet Sci. 2018 Sep 30;19(5):585-591. doi: 10.4142/jvs.2018.19.5.585. PMID: 29929355; PMCID: PMC6167335.
- Choi J, Lee JH, Oh HJ, Kim MJ, Kim GA, Park EJ, Jo YK, Lee SI, Hong DG, Lee BC. Behavioral analysis of cloned puppies derived from an elite drug-detection dog. Behav Genet. 2014 Jan;44(1):68-76. doi: 10.1007/s10519-013-9620-z. Epub 2013 Dec 17. PMID: 24343203.
- Kim MJ, Park JE, Oh HJ, Hong SG, Kang JT, Rhim SH, Lee DW, Ra JC, Lee BC. Preservation through cloning of superior canine scent detection ability for cancer screening. J Vet Clin 2015;32:352–355.
- Lee SH, Oh HJ, Kim MJ, Kim GA, Setyawan EMN, Ra K, Abdillah DA, Lee BC. Dog cloning-no longer science fiction. Reprod Domest Anim. 2018 Nov;53 Suppl 3:133-138. doi: 10.1111/rda.13358. PMID: 30474338.
Is Rejuvenating Research Akin to the Fountain of Youth?
By Barry Nguyen, Biochemistry & Molecular Biology
Author's note: I have always been interested in the aging research field. So much so that I watched ALL 8 episodes of Dr. David Sinclair's aging podcast during the summer (which can be found on Spotify – highly recommend). A lot of the discussion is centered around developments in rejuvenating research and the various biological pathways associated with aging that can be activated depending on one's lifestyle.
As we age, not only does our outward appearance change, but the biological clock hidden within our cells does too. The biological clock, an intrinsic feature shared among cells, allows for partial genetic reprogramming, creating an opportunity to defy the concept of time and aging [2]. This recent development of gene therapy is our closest bet to finding the Fountain of Youth.
About a decade ago, Shinya Yamanaka shared the Nobel Prize for discovering a cocktail of proteins with the potential to revert somatic cells back into stem cells. These transcription factors, Oct4, Sox2, Klf4, and c-Myc, are now known as the Yamanaka Factors [1]. Typically referred to as the OSKM genes, the Yamanaka Factors play a significant role in regulating the developmental signaling network necessary for stem cell pluripotency (the capacity to differentiate into virtually all cell types) and can therefore revert the identity of virtually any cell in the body.
Recent advancements in the study of aging at the molecular level have been significant according to Dr. Diljeet Gill, a postdoctoral researcher at the Salk Institute's Reik Lab, which conducts research on rejuvenation. “These developments have led to techniques that enable researchers to measure age-related biological changes in human cells,” says Dr. Gill [3].
Scientists have identified two defining phenomena of the aging process to assist in characterizing signs of aging. The first is the epigenetic clock, which describes the chemical tags present throughout the genome. The second hallmark is the transcriptome, which encapsulates all the gene readouts produced by the cells.
As an organism ages, its epigenetic marks change substantially; epigenetic modifications are an intrinsic biological feature of aging, with older organisms showing a significantly different epigenetic profile than younger organisms [1]. Because Yamanaka Factors are able to alter the epigenetic landscape of somatic cells, reprogramming-induced rejuvenation strategies using the OSKM genes are made possible. Furthermore, an animal's epigenome can be entirely reset by chemically modifying the DNA and the proteins that help regulate gene activity. Essentially, this form of epigenetic reprogramming allows scientists to reverse the aging of cells.
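Epigenetic clocks of the kind described above are typically built as regression models over DNA methylation levels measured at a panel of CpG sites. The toy sketch below shows the general shape of such a predictor; the site names and weights are invented for illustration and do not come from any published clock.

```python
# Toy epigenetic-clock predictor: a weighted sum of methylation levels (0-1) at CpG sites.
# Site names and weights are hypothetical; real clocks are fit to large methylation
# datasets with penalized regression.
CLOCK_INTERCEPT = 35.0
CLOCK_WEIGHTS = {"cg_site_A": 40.0, "cg_site_B": -25.0, "cg_site_C": 15.0}

def predicted_epigenetic_age(methylation: dict[str, float]) -> float:
    return CLOCK_INTERCEPT + sum(
        weight * methylation.get(site, 0.0) for site, weight in CLOCK_WEIGHTS.items()
    )

young_profile = {"cg_site_A": 0.10, "cg_site_B": 0.60, "cg_site_C": 0.20}
old_profile = {"cg_site_A": 0.55, "cg_site_B": 0.20, "cg_site_C": 0.70}
print(predicted_epigenetic_age(young_profile))  # lower predicted age (27.0 here)
print(predicted_epigenetic_age(old_profile))    # higher predicted age (62.5 here)
```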
Cells that have undergone cellular reprogramming not only appear younger, but also function like young cells. In a new study conducted in collaboration between Dr. Izpisua Belmonte's group at the Salk Institute and Altos Labs, researchers found that mice receiving long-term treatments of Yamanaka Factors showed gene expression and metabolism profiles that resembled those of much younger mice [2].
Results of the study may open up a range of therapeutic possibilities. In the transcriptional profiles, researchers observed notable effects in the APBA2 gene, which is associated with Alzheimer's disease, and the MAF gene, which is associated with cataract development; both displayed a more youthful, more abundant level of transcription, meeting one of the criteria of reversed aging. The results were promising and, according to Dr. Gill, “proved that cells can be rejuvenated without losing their function and that rejuvenation looks to restore some function to old cells.” Moreover, Professor Reik, the group leader, stresses that future work can move toward targeting rejuvenating genes to reduce the effects of aging.
The prospects of this new facet of aging research are extraordinary. However, it should be noted that Yamanaka Factors have the capacity to induce teratomas, a type of germ cell tumor. Although few studies have investigated the extent to which Yamanaka Factors can induce tumors, their ability to induce pluripotency and stem cell-like properties allows cells to reach a cancer-like state. Cancer is typically characterized by uncontrolled cell division, and a differentiated cell's reversion to pluripotency significantly increases the possibility that it will take on a cancer-like state.
Nevertheless, studies within this field are exciting, and researchers are united by a common goal of identifying methods to slow or even reverse the processes that lead to disease. As research continues, we are rapidly approaching a point where predicting, preventing, and even treating diseases through cellular rejuvenation becomes a reality.
References:
- Cellular rejuvenation therapy safely reverses signs of aging in mice. Salk Institute for Biological Studies. (2023, January 5). Retrieved February 5, 2023, from https://www.salk.edu/news-release/cellular-rejuvenation-therapy-safely-reverses-signs-of-aging-in-mice/
- Fan, S. (2022, April 4). Scientists used cellular rejuvenation therapy to rewind aging in mice. Singularity Hub. Retrieved February 5, 2023, from https://singularityhub.com/2022/04/06/scientists-used-cellular-rejuvenation-therapy-to-rewind-aging-in-mice/
- Garth, E. (2022, May 12). Research reverses aging in human skin cells by 30 years. Longevity.Technology – Latest News, Opinions, Analysis and Research. Retrieved February 5, 2023, from https://longevity.technology/news/research-reverses-aging-in-human-skin-cells-by-30-years/
- Two research teams reverse signs of aging in mice | science | AAAS. (n.d.). Retrieved February 5, 2023, from https://www.science.org/content/article/two-research-teams-reverse-signs-aging-mice
The Fungus Among Us: Fungal Presence in Cancerous Growths
By Mirabel Sprague Burleson, Biological Sciences ‘24
Cancer can arise in nearly every tissue in the human body, emerging from complex and diverse mutations that impact many genes. It is incredibly widespread and lethal and is currently one of the most common causes of death in the United States, second only to heart disease by a small margin [1]. Despite the immense resources poured into cancer research, it remains a major issue in modern medicine. One barrier to the advancement of cancer treatments is finding a way to successfully treat cancer cells without harming the healthy cells surrounding them. Common treatments such as chemotherapy and radiation therapy damage normally functioning cells, resulting in dangerous and debilitating side effects.
In order to develop a treatment that is capable of successfully isolating cancer cells, researchers must understand the hallmarks of cancer. Knowing the distinct characteristics of cancerous cells, such as abnormal division rate, increased mobility, and irregular organelles, allows researchers to develop treatments that attack these specific biological targets. Modern research focuses on determining unique bacterial presence in cancerous cells, as bacteria are an easy target for many treatments.
Recent studies on bacterial presence in cancer found metabolically active, cancer-specific communities of bacteria in tumor tissues, which led to their inclusion in updated cancer hallmarks [2]. In the wake of these findings, Narunsky-Haziza et al. (2022) conducted a study to determine whether fungi could also be detected in tumor tissues [3]. Fungal presence in cancer cells could provide a new target for treatments.
The Narunsky-Haziza et al. study sourced samples from four independent cohorts: the Weizmann Institute of Science (WIS), The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the University of California, San Diego (UCSD) [3]. Narunsky-Haziza et al. took 17,401 tissue, blood, and plasma samples across 35 cancer types from these four cohorts. 104 samples made of a waxy substance called paraffin and 191 DNA-extraction negative controls were added to the WIS cohort samples to account for potential contamination by environmental fungi or fungal DNA introduced during handling and processing (the other cohorts' samples had adequate controls for fungal presence) [3,4,5]. These samples were then reexamined by Narunsky-Haziza et al. for fungal presence with internal transcribed spacer 2 (ITS2) amplicon sequencing [3].
The ITS2 region of nuclear ribosomal DNA is considered one of the best DNA barcodes for sequencing because of its variability between even very closely related species and its ease of amplification [6]. ITS2 amplicon sequencing allows researchers to examine the ITS2 region and identify variations between samples. Narunsky-Haziza et al. used this method to compare the sequences found in the samples against known fungal sequences, identifying the different fungal nucleic acids present in the samples' mycobiomes (fungal microbiomes) [3].
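Conceptually, assigning a fungal identity to an ITS2 read comes down to comparing the read against a reference library of known ITS2 sequences and reporting the closest match. The sketch below is a drastically simplified stand-in for that idea (tiny invented sequences, naive position-by-position scoring), not the actual pipeline used by Narunsky-Haziza et al.

```python
# Minimal illustration of ITS2-based identification: score a read against a small
# reference library and report the best-matching species. All sequences are invented.
REFERENCE_ITS2 = {
    "Candida albicans (toy)":      "ACGTACGTTGCA",
    "Malassezia globosa (toy)":    "ACGTTTTTTGCA",
    "Aspergillus fumigatus (toy)": "GGGTACGTTGCA",
}

def similarity(a: str, b: str) -> float:
    """Fraction of matching positions (sequences assumed pre-aligned, equal length)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def classify_read(read: str) -> tuple[str, float]:
    """Return the reference species with the highest similarity to the read."""
    return max(((name, similarity(read, ref)) for name, ref in REFERENCE_ITS2.items()),
               key=lambda pair: pair[1])

print(classify_read("ACGTACGATGCA"))  # closest toy reference and its similarity score
```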
Using ITS2 amplicon sequencing, this study found that while tumor bacterial richness is much higher than fungal richness, there was a clear presence of fungi in the samples examined [3]. Fungi were detected in all 35 cancer types examined, although not all individual tumors were positive for fungal signals [3]. Most fungi were found within cancer and immune cells, similar to bacterial presence [3]. Interestingly, significant correlations were found between specific fungal presence and tumor types, immunotherapy, age, and smoking habits; however, whether these associations are merely correlative or causal is yet to be determined [3]. Also, an unexpected significant positive correlation between fungal and bacterial richness was found in bone, breast, brain, and lung samples, though not in any of the others [3].
This study does present several caveats. For one, differences in sample preparation, sequencing, bioinformatic pipelines, and reference databases exist between the four cohorts, which affect bacteriome analyses. Another potential issue is that although there was a large number of samples included, the stages of cancer across samples were different for all four of the cohorts, which created high variability in the data [4,5]. The WIS and TCGA cohorts also showed high variation in mycobiome richness, which Narunsky-Haziza et al. suspect is likely due to the negative controls introduced to the WIS cohort as well as potential split reads found in the TCGA cohort [3,4,5]. A split read is a sequence that partially matches the reference genome in at least two places, but has no continuous alignment to it. Split reads can indicate a difference in structure between the sample and the reference genome.
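To make the split-read idea concrete, the toy sketch below checks whether the two halves of a read each align somewhere in a reference sequence even though the whole read does not align anywhere contiguously. It is only a conceptual illustration; real split-read detection works on aligned sequencing records, not raw strings.

```python
# Toy split-read check: the read's prefix and suffix each match the reference,
# but at different, non-adjacent positions, and the full read matches nowhere.
def is_split_read(read: str, reference: str) -> bool:
    if read in reference:                      # continuous alignment exists -> not split
        return False
    half = len(read) // 2
    prefix_pos = reference.find(read[:half])
    suffix_pos = reference.find(read[half:])
    return prefix_pos != -1 and suffix_pos != -1 and suffix_pos != prefix_pos + half

reference = "AAAACCCCGGGGTTTTAAAACCCC"
print(is_split_read("CCCCGGGG", reference))    # False: aligns contiguously
print(is_split_read("CCCCAAAA", reference))    # True: halves map to separate loci
```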
Additionally, while four different staining methods were used to find fungal presence and tumor-specific localization patterns, they proved to have differing sensitivities across cancer types. As all of the staining methods used can only detect certain subsets of the fungal kingdom, a relatively high false-negative rate can be expected. In contrast, although each cohort used negative controls, some false-positive results are inevitable [3].
Although this study successfully broadened the cancer microbiome landscape, these findings do not establish any causality in the presence of fungal nucleic acids. Narunsky-Haziza et al. hope that this first pan-cancer mycobiome atlas will serve as a key player in informing future cancer research to help characterize new information for cancer diagnostics and therapeutics [3]. While it remains unclear if fungal DNA plays a role in cancer development or severity, with further research, fungal presence could prove to be a helpful biomarker and potentially provide advancement in cancer treatments for the benefit of patients worldwide.
Could Training the Nose Be the Solution to Strange Post COVID-19 Odors?
By Bethiel Dirar, Human Biology ’24
Author’s Note: I wrote this article as a New York Times-inspired piece in my UWP102B course, Writing in the Disciplines: Biological Sciences. Having chosen the topic of parosmia treatments as a writing focus for the class, I specifically discuss olfactory training in this article. In the midst of the pandemic, this condition caught my attention once I found out about it through social media. It had me wondering what it would be like to struggle to enjoy food post-COVID infection. I simply hope that readers learn something new from this article!
Ask someone who has had COVID-19 if they’ve had issues with their sense of smell, and they may very well say yes. According to the CDC, one of the most prevalent symptoms of the respiratory disease is loss of smell [1]. However, there is a lesser understood nasal problem unfolding due to COVID-19: parosmia. Parosmia, as described by the University of Utah, is a condition in which typically pleasant or at least neutral smelling foods become displeasing or repulsive to smell and taste [2].
As a result of this condition, the comforts and pleasures of having meals, snacks, and drinks disappear. Those who suffer from this condition have shared their experiences through TikTok. In one video that has amassed 9.1 million views, user @hannahbaked describes how parosmia has severely impacted her physical and mental health. She tearfully explains how water became disgusting to her, and discloses hair loss and a reliance on protein shakes as meal replacements.
The good news, however, is that researchers have now identified a potential solution to this smelly situation that does not involve drugs or invasive procedures: this solution is olfactory training.
A new study shows that rehabilitation through olfactory training could allow patients with parosmia induced by COVID-19 to return to enjoying their food and drink. Olfactory training is a therapy in which pleasant scents are administered nasally [3].
Modified olfactory training was explored in a 2022 study as a possible treatment for COVID-19-induced parosmia. Aytug Altundag, MD and the other researchers of the study recruited 75 COVID-19 patients with parosmia from the Acibadem Taksim Hospital in Turkey and sorted them into two different groups. One group received modified olfactory training and another group served as a control and received no olfactory training [3]. Modified olfactory training differs from classical olfactory training (COT) in that it expands the number of scents used beyond COT's four scents: rose, eucalyptus, lemon, and cloves [4]. These four scents were popularized in olfactory training use as they represent different categories of odor (floral, resinous, fruity, and spicy, respectively) [5].
For 36 weeks, the treatment group was exposed twice a day to a total of 12 scents that are far from foul. In each 12-week period, four scents were administered. For the first 12 weeks, the subjects started by smelling eucalyptus, clove, lemon, and rose. During the next 12 weeks, the next set of scents was administered: menthol, thyme, tangerine, and jasmine. To round it off, for the last 12 weeks they smelled green tea, bergamot, rosemary, and gardenia scents. Throughout the study, the subjects would smell a scent for 10 seconds, then wait 10 seconds before smelling the next scent. The subjects completed the five-minute training sessions around breakfast time and bedtime [3].
To evaluate the results of the study, the researchers implemented a method known as the Sniffin' Sticks test. This test combines an odor threshold test, an odor discrimination test, and an odor identification test to form a TDI (threshold, discrimination, identification) score. The higher the score, the more normal an individual's olfactory perception. A composite score between 30.3 and the maximum score of 48 indicates normal olfactory function, while scores below 30.3 point to olfactory dysfunction [3].
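The scoring logic is simple enough to write out directly. The snippet below is a hedged paraphrase of it (composite TDI as the sum of the three sub-scores, with 30.3 as the cutoff for normal function and 48 as the maximum), not code from the study itself.

```python
# TDI score: sum of threshold, discrimination, and identification sub-scores.
# A composite of 30.3 or above (out of a maximum of 48) indicates normal olfactory function.
NORMAL_CUTOFF = 30.3
MAX_SCORE = 48

def tdi(threshold: float, discrimination: float, identification: float) -> float:
    score = threshold + discrimination + identification
    if score > MAX_SCORE:
        raise ValueError("TDI composite cannot exceed 48")
    return score

def interpret(score: float) -> str:
    return "normal olfactory function" if score >= NORMAL_CUTOFF else "olfactory dysfunction"

print(interpret(tdi(8.0, 11.0, 12.5)))  # 31.5 -> normal olfactory function
print(interpret(tdi(3.0, 6.0, 5.0)))    # 14.0 -> olfactory dysfunction
```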
The results of this research are promising. By the ninth month of the study, a statistically significant difference in average TDI scores had been found between the group that received modified olfactory training and the control group (27.9 versus 14) [3]. This has led the researchers to believe that with prolonged periods of the therapy, olfactory training could soon become a proven treatment for COVID-19-induced parosmia.
With this conclusion, there is greater hope now for those living with this smell distortion. Fifth Sense, a UK charity focusing on smell and taste disorders, has spotlighted stories emphasizing the need for effective treatments for parosmia. One member of the Fifth Sense community and sufferer of parosmia, 24-year-old Abbie, discussed the struggles of dealing with displeasing odors. “I ended up losing over a stone in weight very quickly because I was skipping meals, as trying to find food that I could eat became increasingly challenging,” she recounted to Fifth Sense [6].
If olfactory training becomes an effective treatment option, eating and drinking might no longer be a battle for those with parosmia. Countless people suffering from the condition could finally experience the desperately needed improvement in their quality of life, especially as COVID becomes endemic.
REFERENCES:
- Centers for Disease Control and Prevention. Symptoms of COVID-19. Accessed November 20, 2022. Available from: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html
- University of Utah Office of Public Affairs. Parosmia after COVID-19: What Is It and How Long Will It Last? Accessed November 20, 2022. Available from: https://healthcare.utah.edu/healthfeed/postings/2021/09/parosmia.php
- Altundag Aytug, Yilmaz Eren, Caner Kesimli, Mustafa. 2022. Modified Olfactory Training Is an Effective Treatment Method for COVID-19 Induced Parosmia. The Laryngoscope [Internet]. 132(7):1433-1438. doi:10.1002/lary.30101
- Yaylacı Atılay, Azak Emel, Önal Alperen, Ruhi Aktürk Doğukaan, and Karadenizli Aynur. 2022. Effects of classical olfactory training in patients with COVID-19-related persistent loss of smell. EUFOS [Internet]. 280(2): 757–763. doi:10.1007/s00405-022-07570-w
- AbScent. Rose, lemon, clove and eucalyptus. Accessed February 5, 2023. Available from: https://abscent.org/insights-blog/blog/rose-lemon-clove-and-eucalyptus
- Fifth Sense. Abbie’s Story: Parosmia Following COVID-19 and Tips to Manage It. Accessed November 23, 2022. Available from: https://www.fifthsense.org.uk/stories/abbies-story-covid-19-induced-parosmia-and-tips-to-manage-it/
Western Sandpiper Population Decline on the Pacific Coast of North America
By Emma Hui, Biological Sciences ‘26
INTRODUCTION
The fall migration of Western Sandpipers from the high Arctic to Southern California has always been a treasured gem of the season. Yet as the decades roll by, Western Sandpiper populations have been in continuous decline, and the rugged coastline of the Pacific Northwest seems lonelier than ever [1]. As a migratory bird species, the Western Sandpiper plays crucial ecological roles as an indicator of ecosystem health and as a connector of diverse habitats across continents.
The purpose of this essay is to introduce the ongoing decline of Western Sandpiper populations in recent years, with a particular focus on the population decline in North America. This paper will provide an overview of Western Sandpiper migration and population changes, examine the potential causes behind the dynamics, and analyze the decline’s corresponding ecological effects. I will also explore possible remedies for the issue from the perspectives of habitat restoration, conservation, and legislative measures. The ultimate objective of this essay is to raise awareness and promote action for the ecological conservation of Western Sandpipers before it is too late.
Background
Western Sandpipers are small migratory birds that breed in high Arctic regions of Alaska and Siberia and migrate south to the Pacific coast of North and South America for winter. Their migration is 15,000 kilometers every year along the Pacific Flyway, spanning from Alaska to South America. During winter, their nonbreeding season, they move to coastal areas with mudflats, estuaries, and beaches, which allows the birds to rest and forage for food. In spring, the Western Sandpipers take a similar reverse migration route, stopping at critical habitats along the way until they reach the treeless Arctic tundra. As they fly north, they breed in Northwestern Alaska and Eastern Siberia, and each female lays three to four eggs.
They measure 6 to 7 inches in length and have reddish brown-gold markings on their head and wings. Their most salient features are their slender, pointed bills and long legs. The bills are adapted for foraging on crustaceans, insects, and mollusks in muddy areas, while their long, thin legs are used for wading in shallow water and sand. These small, darting birds can be seen in tidal areas, foraging in mudflats for invertebrates and biofilms at low and middle tides alongside other shorebird communities.
Having multiple species foraging together makes shorebirds among the most difficult birds to identify, especially with many species being quite similar in morphology as well as call. As they always smoothly blend into the community, it is not surprising that the population decline of the small Western Sandpipers went unnoticed at first and was reported only when changes in population levels became more obvious.
Causes of Western Sandpiper population decline
The decline in the Western Sandpiper population has been continuous throughout the past decade. According to the North American Breeding Bird Survey, which monitors populations of breeding birds across the continent, the Western Sandpiper had a relatively stable population trend in the United States from 1966 to 2015, with an annual population decline of 0.1% over this period [2]. In more recent years, a research team in British Columbia, Canada, that investigates estuary condition change has noticed a decline in Western Sandpipers inhabiting the Fraser River estuary. Observing the Western Sandpiper population during northward migration on the Fraser River estuary, the team reported a 54% decline in Western Sandpipers over the study period reported in 2019 []. The negative trend in migrating Western Sandpipers across North America is consistent with this Fraser River study. A study using geolocator wetness data to detect periods of migratory flight examined the status and trends of 45 shorebird species in North America, including the Western Sandpiper. The author found that the Western Sandpiper population in the U.S. declined by 37% from 1974 to 2014, with an estimated population of 2,450,000 individuals in 2014 compared to 3,900,000 individuals in 1974 [3].
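The 37% figure follows directly from the two population estimates quoted above; the one-liner below simply makes that arithmetic explicit and is an illustration, not analysis code from the cited study.

```python
# Percent decline computed from the 1974 and 2014 U.S. population estimates quoted above.
pop_1974, pop_2014 = 3_900_000, 2_450_000
decline = (pop_1974 - pop_2014) / pop_1974 * 100
print(f"Decline: {decline:.0f}%")  # ~37%
```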
Currently, the BirdLife International Data Zone lists the Western Sandpiper as "least concern" because of its wide range, but its population is decreasing. The species faces threats from habitat loss and degradation, pollution, and disturbance, particularly in its wintering and stopover sites along the North American Pacific coast. Habitat loss due to human activities, namely agricultural expansion and oil development, has contributed to the loss and degradation of the Western Sandpiper's breeding, wintering, and stopover habitats [4]. The loss of these habitats has led to reductions in breeding success, migration stopover times, and overwintering survival of Western Sandpipers. Meanwhile, Western Sandpipers are constantly exposed to various pollutants, including pesticides, heavy metals, oil spills, and plastics. These contaminants affect the Western Sandpiper's health and reproductive success directly and also impact the Western Sandpiper's prey and predators.
Climate change is also expected to have future impacts on the species. One possible shift is in the timing, intensity, and distribution of precipitation. These precipitation shifts have caused droughts and floods in areas that serve as breeding and stopover habitats for the Western Sandpiper and other shorebirds, leading to reduced breeding success and increased mortality in the Western Sandpiper population. Climate change also affects sea level, temperature, and the frequency and severity of extreme weather, all of which can affect the quality of breeding habitat and food availability for Western Sandpipers.
The interactions between these factors are complex and can lead to a feedback loop of negative impacts on the population. As habitat loss leads to reduced food resources, Western Sandpipers’ overall health is negatively impacted, making them even more vulnerable to pollutants and contaminants.
Effects of Western Sandpiper population decline
The decline of the Western Sandpiper population can have significant impacts on ecosystems. As a migratory shorebird, the Western Sandpiper's ecological role lies in coastal environments; by preying on invertebrates along the coastal shoreline, Western Sandpipers control their prey species' populations and balance the ecosystem. A decline in the Western Sandpiper population can lead to an increase in their prey species, such as polychaete worms and bivalves, which can change the composition of other species that prey on similar invertebrates and perturb the ecosystem's equilibrium. Furthermore, many predator species, such as falcons and owls, depend on the Western Sandpiper as a food source, and its decline will negatively impact these predators.
Aside from predator-prey dynamics, Western Sandpipers also forage with many other migratory shorebird species in muddy areas along the coast. These birds, such as the Marbled Godwit and the Red Knot, depend on the same stopover habitats as the Western Sandpiper during their own migrations and thus compete for similar resources. As the Western Sandpiper population declines, changing interspecies dynamics will shift the survival and reproductive success of other species, disturbing the equilibrium of the stopover ecosystems.
Western Sandpipers are a popular bird species among birdwatchers and nature enthusiasts, and their migration stopover sites in the Pacific Northwest and Alaska play an important role in ecotourism and its attendant economic and cultural values. Western Sandpiper ecotourism in the Copper River Delta, Alaska, was estimated to generate over $1.5 million in revenue and 100 jobs [5].
Overall, the decline of the Western Sandpiper population can have a complex and far-reaching impact on both the ecosystem and human society. By interacting with native species and migratory species in their natural habitats, the Western Sandpiper’s role deeply interweaves within the ecosystem.
Conservation efforts and solutions
Conservation efforts to protect and restore Western Sandpiper populations are critical in maintaining ecosystem health. One of the main strategies to protect the Western Sandpiper is to conserve its stopover sites and breeding grounds by monitoring and researching invasive species and coastal development. Aside from consistent restoration of degraded habitats after human disturbance, prevention of further human development in Western Sandpiper habitats is also critical in maintaining habitat health.
Educating the public about the importance of Western Sandpipers and their habitats is a crucial aspect of raising awareness and gaining support for conservation efforts. Outreach such as public lectures, bird festivals, and school tours are great opportunities to connect humans to the beautiful avian community and improve public consciousness regarding ecosystem conservation. An example is the Monterey Bay Birding Festival, an annual festival in California held during the shorebird fall migration season. This festival promotes awareness of shorebirds with its educational workshops and bird tours [6].
Currently, conservation efforts of shorebird populations face limitations in funding and coordination. Significant funding efforts are required to restore what has been lost, but limited budgets restrict the scope and effectiveness of conservation approaches. In addition, since conservation efforts are implemented on a site-by-site basis, there is a need for improved coordination among different agencies to solve problems together. Potential solutions to the need for adequate funding and coordination are the implementation of stronger policies of avian conservation and habitat conservation as well as the encouragement of sustainable tourism and outreach efforts.
CONCLUSION
The Western Sandpiper population in North American tidal areas has been experiencing a significant decline in recent years, largely due to human activities and subsequent climate change. Population changes of this small, long-legged shorebird affect the many species that interact and co-exist with it in the coastal ecosystem. Western Sandpipers are one of the most abundant shorebird species in North America and play a vital part in the ecological and cultural values along the coast. Population dynamics vary year to year and between different populations, and increased efforts in the monitoring and conservation of the Western Sandpiper community and its habitats are essential to ensuring the species' survival. We need to investigate the causes behind the population's decline in recent years and take action before the negative effects go too far and these ballerinas of the beach are unable to recover.
REFERENCES
[1] Andres, B., Smith, B. D., Morrison, R. I. G., Gratto-Trevor, C., Brown, S. C., Friis, C. A., … Paquet, J. (2013). Population estimates of North American shorebirds, 2012. Wader Study Group Bulletin, 119, 178-194.
[2] The Cornell Lab of Ornithology. (n.d.). Western Sandpiper Overview, All About Birds, Cornell Lab of Ornithology. Cornell University. https://www.allaboutbirds.org/guide/Western_Sandpiper/overview
[3] The Wader Study Group. (n.d.). Geolocator Wetness Data Accurately Detect Periods of Migratory Flight in Two Species of Shorebird. https://www.waderstudygroup.org/article/9619/
[4] Smith, B. D., Andres, B. A., & Morrison, R. I. G. (2017). Declines in shorebird populations in North America. Wader Study, 124(1), 1-11.
[5] Vogt, D. F., Hopey, M. E., Mayfield, G. R. III, Soehren, E. C., Lewis, L. M., Trent, J. A., & Rush, S. A. (2012). Stopover site fidelity by Tennessee warblers at a southern Appalachian high-elevation site. The Wilson Journal of Ornithology, 124(2), 366-370. https://doi.org/10.1676/11-107.1
[6] Cornell Lab of Ornithology. (2019, September 24). Monterey Bay Festival of Birds [Web log post]. All About Birds. https://www.allaboutbirds.org/news/event/monterey-bay-festival-of-birds/#
[7] Haig, S. M., Kaler, R. S. A., & Oyler-McCance, S. J. (2014). Causes of contemporary population declines in shorebirds. The Condor, 116(4), 672-681.
[8] Kallenberg, M. (2021). The 121st Christmas Bird Count in California. Audubon. https://www.audubon.org/news/the-121st-christmas-bird-count-california
[9] Reiter, P. (2001). Climate change and mosquito-borne disease. Environmental Health Perspectives, 109(1). https://doi.org/10.1289/ehp.01109s1141
[10] Sandpipers Go with the Flow: Correlations … – Wiley Online Library. (n.d.). Wiley Online Library. https://doi.org/10.1002/ece3.7240
[11] The Wader Study Group. (n.d.). Comparison of Shorebird Abundance and Foraging Rate Estimates from Footprints, Fecal Droppings, and Trail Cameras. https://www.waderstudygroup.org/article/13389/
[12] US Fish and Wildlife Service. (2022). Western Sandpiper (Calidris mauri). https://www.fws.gov/species/western-sandpiper-calidris-mauri
[13] Iwamura, T., & Possingham, H. P. (2013). Migratory connectivity magnifies the consequences of habitat loss from sea-level rise for shorebird populations. Proceedings of the Royal Society B, 280(1761), 20130325. https://doi.org/10.1098/rspb.2013.0325
Review of Literature: Use of Deep Learning for Cancer Detection in Endoscopy Procedures
By Nitya Lorber, Biology and Human Physiology ’23
Author’s Note: I think now more than ever, the reality of artificial intelligence is knocking on our doors. We are already seeing how the use of AI programs are becoming more and more normalized for our daily use. AI is now driving our cars, talking to us through chatbots, and opening our phones with facial recognition. Frankly, I find it both incredible and intimidating having an artificial and computerized program making decisions with the intent of modeling the reasoning capabilities of the human mind. As an aspiring oncologist, I was really interested to see how AI is being used in the healthcare system, specifically in the field of oncology. So when my biological sciences writing class asked me to write a literature review on a topic of my choice, it was a no brainer – no AI needed. I hope that readers of this review can come away with a sense of comfort that AI is being used for improving cancer detection to potentially save lives.
ABSTRACT
Deep learning is a relatively new branch of machine learning designed to emulate and extend aspects of human intellect [1]. With technological improvements and the development of state-of-the-art machine learning algorithms, the applications of deep learning in medicine, specifically in the field of oncology, are endless. Several facilities worldwide are training deep learning systems to recognize lesions, polyps, neoplasms, and other irregularities that may suggest the potential presence of various cancers. For colorectal cancers, deep learning can aid early detection during colonoscopies, increasing the adenoma detection rate (ADR) and decreasing the adenoma miss rate (AMR), both essential indicators of colonoscopy quality. For gastrointestinal cancers, deep learning systems such as ENDOANGEL, GRAIDS, and A-CNN can help with early detection, giving patients a higher chance of survival. Further research is required to evaluate how these programs will perform in a clinical setting as a potential secondary tool for diagnosis and treatment.
INTRODUCTION
Artificial intelligence is the ability of a computer to execute functions generally linked to human intelligence, such as the ability to reason, find meaning, summarize information, or learn from experience [2]. Over the years, computing power has significantly improved, and this progress has provided several opportunities for machine learning applications in medicine [1]. Generally, deep learning in medicine utilizes machine learning models to search medical data and highlight pathways to improve the health and well-being of the patient, most commonly through physician decision support and medical imaging analysis [3]. Machine intelligence collects data and identifies pixel-level features of microscopic structures that are easily overlooked or invisible to the naked eye [1, 4]. Deep learning is a subfield of machine learning that uses artificial neural networks to learn patterns and relationships in data. Its basic structure involves trained interconnected nodes, or "neurons," organized into layers [1]. What sets deep learning apart from other types of machine learning is the depth of the neural network, which allows it to learn increasingly complex features and relationships in the data. The field of oncology has begun to incorporate deep learning into cancer screening by training models to recognize lesions, polyps, neoplasms, and other irregularities that may suggest the potential presence of various cancers, including lung, breast, and skin cancers. In experimental trial settings, deep learning has shown its ability to aid in early detection of a variety of cancers, specifically colorectal and gastrointestinal cancers, and although few studies show its performance in clinical settings, preliminary studies illustrate promising results for future deep learning applications in oncology.

The traditional approach to detecting colorectal and gastrointestinal cancers is through screening endoscopy procedures, which allow physicians to view internal structures [5-8]. Colonoscopies are a type of endoscopy in which a long, flexible tube called the colonoscope is inserted into the rectum and large intestine to detect abnormalities, such as precancerous and cancerous lesions [7-9]. Advancing the diagnostic sensitivity and accuracy of cancer detection through deep learning helps save lives by catching the disease before it progresses too far [1, 4].
DETECTION OF COLORECTAL CANCERS
Colorectal cancers (CRC), cancers of the colon and rectum, have the second highest cancer death rate for men and women worldwide [5]. Frequent colonoscopy and polypectomy screening can reduce the occurrence of and mortality from CRC by up to 68% [5, 7]. However, several significant factors determine colonoscopy quality: the number of polyps and adenomas found during colonoscopy, procedural factors such as bowel preparation, morphological characteristics of the lesion, and, most importantly, the endoscopist [5-8]. The performance of the endoscopist can vary based on several factors, including level of training, technical and cognitive skills, knowledge, and years of experience inspecting the colorectal mucosa to recognize polypoid (elevated) and non-polypoid (non-elevated) lesions [6, 7].
The most essential and reliable performance indicator for individual endoscopists is their adenoma detection rate (ADR) [5, 6]. ADR is the percentage of average-risk screening colonoscopies in which one or more adenomatous colorectal lesions are found, quantifying the endoscopist's sensitivity for detecting CRC neoplasia [5, 7]. ADR is inversely related to the incidence of and mortality from CRC after routine colonoscopies [5-7]. Another performance indicator commonly used to investigate differences between endoscopists or technologies is the adenoma miss rate (AMR), calculated from sets of two repeated colonoscopies on the same subject by counting the lesions missed in the first pass but found in the second [7]. The issue with the current approach to detecting CRC is the variability in performance, leading to widely diverse ADRs and AMRs amongst endoscopists. This variability often results in missed polyps and overlooked adenomatous lesions in patients, which can have serious consequences [5-8].
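Both quality indicators reduce to simple counts. The sketch below shows one plausible way to compute them from per-procedure records; the field names are hypothetical, and the AMR denominator of "all adenomas found across both passes" is an assumption about the usual tandem-colonoscopy formula rather than a detail taken from the cited studies.

```python
# Hypothetical per-procedure records for illustrating ADR and AMR calculations.
screening_colonoscopies = [
    {"adenomas_found": 2},
    {"adenomas_found": 0},
    {"adenomas_found": 1},
    {"adenomas_found": 0},
]

# ADR: fraction of screening colonoscopies in which at least one adenoma was found.
adr = sum(c["adenomas_found"] > 0 for c in screening_colonoscopies) / len(screening_colonoscopies)
print(f"ADR: {adr:.0%}")  # 50%

# Tandem (back-to-back) colonoscopies on the same subjects: AMR counts adenomas missed in
# the first pass but found in the second, here divided by all adenomas found in either pass.
tandem_exams = [
    {"found_first_pass": 3, "found_only_second_pass": 1},
    {"found_first_pass": 2, "found_only_second_pass": 0},
]
missed = sum(t["found_only_second_pass"] for t in tandem_exams)
total = sum(t["found_first_pass"] + t["found_only_second_pass"] for t in tandem_exams)
print(f"AMR: {missed / total:.0%}")  # ~17%
```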
DEEP LEARNING IN COLONOSCOPIES
Deep learning provides a possible solution to the problem of endoscopist performance variability. Deep learning could provide a standardized approach to colonoscopy imaging that would help eliminate inaccuracies generated by endoscopists who may be distracted, exhausted, or less experienced [6, 8]. Over the past few years, several studies have analyzed deep learning's impact on endoscopy quality (i.e., ADR and AMR) and its role in reducing the rate of CRCs. Convolutional neural networks (CNNs) excel at image analysis tasks, including finding and categorizing lesions [5]. Another experimental approach involves developing a computer-aided detection (CADe) system, built on an original CNN-based algorithm, to assist endoscopists in detecting colorectal lesions during colonoscopy [7]. Overall, deep learning systems can improve endoscopy quality and possibly reduce the CRC death rate by increasing ADR and polyp detection rates in the general population [5-8].
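For readers curious what a "CNN-based algorithm" looks like in code terms, the sketch below is a deliberately minimal PyTorch classifier for single endoscopy frames (polyp present versus absent). It is an assumption-laden toy, not the architecture of GI Genius or any of the cited CADe systems, which operate on live video and localize lesions rather than producing a single label.

```python
# Minimal frame-level CNN for illustration: classifies one RGB endoscopy frame as
# "polyp" vs. "no polyp". Real CADe systems are far larger and draw bounding boxes in video.
import torch
import torch.nn as nn

class TinyPolypCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolution + pooling stages extract increasingly abstract image features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling plus a linear layer maps features to two class scores.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyPolypCNN()
frame = torch.randn(1, 3, 224, 224)   # one dummy 224x224 RGB frame
logits = model(frame)
print(logits.softmax(dim=1))          # per-class probabilities (untrained weights)
```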
Evidence that deep learning can increase ADR has led to several subsequent studies on how this technology may impact the current system. For instance, it was not previously known how the increase in ADR from deep learning relates to physician experience. In trying to determine this relationship, Repici A, et al. (2022) discovered that both experienced and non-experienced endoscopists displayed a similar ADR increase during routine colonoscopies with CADe assistance compared to those without CADe assistance [6]. Surprisingly, this study concluded that deep learning was a significant factor for the ADR score, while the level of experience of the endoscopist was not [6]. Along with increasing ADR, Kamba et al. (2021) explored how deep learning would impact AMR and found a reduced AMR in colonoscopies conducted with CADe assistance compared to standard colonoscopies [7]. This study further confirmed conclusions made by Repici A, et al., finding that endoscopists of all experience levels using CADe benefit from the reduced AMR and increased ADR [6, 7].
Moreover, deep learning systems are exceptionally good at detecting flat lesions, which are often overlooked by endoscopists [6-8]. In evaluating deep learning for detecting Lynch Syndrome (LS), the most common hereditary CRC syndrome, Hüneburg R, et al. found a higher detection rate of flat adenomas using deep learning compared to high-definition white-light endoscopy (HD-WLE), a standard protocol commonly used to examine polyps [8]. However, unlike other studies, the overall ADR was not significantly different between the deep learning and HD-WLE groups, most likely because of the study's small sample size and exploratory nature [8]. This study was not the only one to observe a lack of significant increase in ADR. Zippelius C, et al. (2022) sought to assess the accuracy and diagnostic performance of a commercially available deep learning system, the GI Genius system, in real-time colonoscopy [5]. Although the GI Genius system performs well in daily clinical practice and could very well reduce performance variability and increase overall ADR in less experienced endoscopists [8], it performed no better than expert endoscopists [5]. Overall, deep learning proved superior or equal to standard colonoscopy performance, but never worse [5-8].
DETECTION OF UPPER GASTROINTESTINAL CANCERS
Upper gastrointestinal cancers, including esophageal and gastric cancer, are among the highest-ranked malignancies and causes of cancer-related deaths worldwide [4, 10, 11]. Of these, gastric cancer is the fifth most common form of cancer and the third leading cause of cancer-related deaths worldwide, with approximately 730,000 deaths each year [10, 11]. Most upper gastrointestinal cancers are diagnosed at late stages because their signs and symptoms go unnoticed or are too general to prompt a correct diagnosis [10]. On the other hand, if these cancers are detected early, the 5-year survival rate of patients can exceed 90% [10, 11]. To diagnose gastrointestinal cancers, endoscopists conduct esophagogastroduodenoscopy (EGD) procedures, examining upper gastrointestinal lesions to find early gastric cancer (EGC) [4, 11]. However, similar to colonoscopies, endoscopists require long-term specialized training and experience to accurately detect the difficult-to-see EGC lesions with EGD [4, 11]. EGD quality varies significantly with endoscopist performance and consequently impacts patient health [4, 10-11]. Because of the subjective, operator-dependent nature of endoscopic diagnosis, many patients are at risk of leaving their endoscopy examinations with undetected suspicious upper gastrointestinal cancers, especially in less developed, remote regions [10]. The rate of undetected upper gastrointestinal cancers goes as high as 25.8%, and 73% of these cases resulted from endoscopists' mistakes, such as failing to detect a specific lesion or mischaracterizing a lesion as benign during biopsy [11]. There is a dire need for improved endoscopy quality and reliability, as current examinations rely too heavily on endoscopist knowledge and experience, introducing too much variability into EGC detection [10, 11].
DEEP LEARNING IN ENDOSCOPIES
Deep learning systems may effectively monitor blind spots during EGDs, but very little research on deep learning applications in upper gastrointestinal cancers was conducted before 2019 [4, 11]. Previously, deep learning had mainly been used to distinguish between neoplastic, or monoclonal, and non-neoplastic, or polyclonal, lesions [10, 11]. However, CNNs were not among the researched algorithms, and the systems examined at the time could not sufficiently distinguish between malignant and benign lesions [10, 11]. The first functional deep learning system to specifically detect gastric cancer was the 2019 "original convolutional neural network" (O-CNN), but this system had low statistical precision, rendering it unviable for clinical practice [11]. This prior lack of research led to the development of three deep learning systems that could be used to detect and diagnose upper gastrointestinal cancers, in hopes of catching the disease in its early stages: GRAIDS, ENDOANGEL, and A-CNN.
The first deep learning system developed and validated was the Gastrointestinal Artificial Intelligence Diagnostic System (GRAIDS), a deep learning semantic segmentation model capable of providing the first real-time automated detection of upper gastrointestinal cancers [10]. Luo H, et al. (2019) trained GRAIDS to detect suspicious lesions during endoscopic examination using over one million endoscopy images from six hospitals of varying experience levels across China [10]. GRAIDS is designed to provide real-time assistance for diagnosing upper gastrointestinal cancers during endoscopies as well as for retrospectively assessing the images [10]. In the study, Luo H, et al. (2019) found that GRAIDS could detect upper gastrointestinal cancers both retrospectively and in a prospective observational setting with high accuracy and specificity [10]. GRAIDS’s sensitivity is similar to that of expert endoscopists. However, GRAIDS cannot recognize some gastric contours delineated by experts, leading to an increased risk of false positives and suggesting that this system is most effective as a secondary tool [10]. GRAIDS is thus seen as a cost-effective method for early cancer detection that can help endoscopists of every experience level [10].
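Semantic segmentation differs from frame-level classification in that every pixel receives a lesion-versus-background score, which is what allows a system like GRAIDS to outline suspicious regions on the live video feed. The toy encoder-decoder below illustrates the idea under that assumption; it does not reproduce GRAIDS’s published architecture.

```python
# Toy encoder-decoder for per-pixel lesion segmentation (illustrative only).
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder downsamples the frame while extracting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder upsamples back to full resolution and emits one logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# The per-pixel probability map can be thresholded to highlight suspicious
# regions and overlay them on the live endoscopy feed.
model = TinySegmenter()
frame = torch.randn(1, 3, 256, 256)
lesion_mask = torch.sigmoid(model(frame)) > 0.5
```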
The second deep learning diagnostic system is the Advanced Convolutional Neural Network (A-CNN), an upgraded version of O-CNN developed by Namikawa K, et al. (2020) [11]. Improving upon its predecessor, A-CNN was able to distinguish gastric cancers from gastric ulcers with high accuracy, sensitivity, and specificity [11]. This is an essential improvement because gastric ulcers are often mistaken for cancer, leading to unnecessary cancer treatments for the patient. A-CNN can now help endoscopists with early diagnosis, improving survival rates for gastric cancers [11]. In addition, the program helps standardize the endoscopy approach, assuaging some of the variability in endoscopist performance [11].
The third deep learning system is ENDOANGEL, developed by Wu L, et al. (2021). Like A-CNN, ENDOANGEL is an upgrade of an older CNN-based algorithm called WISENSE [4]. Before the update, WISENSE demonstrated the ability to monitor blind spots and create photodocumentation in real time during EGD [4]. Compared to WISENSE, ENDOANGEL achieved real-time monitoring during EGD with fewer endoscopic blind spots, a longer inspection time, and EGC detection with high accuracy, sensitivity, and specificity [4]. The deep learning program shows potential for detecting EGC in real clinical settings [4].
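Conceptually, real-time blind-spot monitoring during EGD amounts to classifying each video frame by anatomical site, tracking which sites have actually been inspected, and flagging suspicious frames along the way. The sketch below illustrates this loop; `classify_site` and `detect_lesion` are hypothetical stand-ins for trained models, and the site list is a simplified assumption rather than WISENSE or ENDOANGEL’s actual station protocol.

```python
# Hypothetical sketch of a real-time EGD monitoring loop (not the published
# WISENSE/ENDOANGEL implementation). `classify_site` and `detect_lesion`
# stand in for trained deep learning models.
from typing import Callable, Iterable, Set

GASTRIC_SITES: Set[str] = {"antrum", "angulus", "body", "fundus", "cardia"}

def monitor_egd(
    frames: Iterable,
    classify_site: Callable[[object], str],
    detect_lesion: Callable[[object], bool],
) -> Set[str]:
    """Return the set of sites never observed (blind spots) after the exam."""
    observed: Set[str] = set()
    for frame in frames:
        site = classify_site(frame)   # e.g. "antrum"
        observed.add(site)
        if detect_lesion(frame):      # True if a suspicious EGC lesion is flagged
            print(f"Alert: suspicious lesion near the {site}")
    return GASTRIC_SITES - observed   # any sites still unvisited are blind spots
```

The value of such a loop is that the endoscopist is warned about unvisited regions before withdrawing the scope, rather than discovering the gap retrospectively.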
FUTURE IMPROVEMENT IN DEEP LEARNING DEVELOPMENT
Because deep learning is a relatively new technology, much of the available research is preliminary. These studies attempt to determine whether deep learning is a viable approach to reducing endoscopist performance variability, but most require further research to show how the technology would perform in a clinical setting. For example, most studies involved deep learning systems that were not commercially available and were conducted in highly specialized centers, so they cannot indicate deep learning’s performance for lesion detection in daily clinical practice across different populations around the world [4, 6-7, 10-11]. Additionally, studies need to incorporate larger patient sample sizes before their results can be generalized to a broader population [7, 8]. Lastly, researchers should still account for endoscopist performance in their trials to explore every option and to ensure that each patient receives the same quality of care regardless of who their physician is or that physician’s views on and acceptance of deep learning technology [4, 5, 8]. These preliminary studies show potential, but the systems need improvement and further research before they can be used as standalone options [4, 10-11].
CONCLUSION
Overall, deep learning has demonstrated an impressive ability to detect colorectal and gastrointestinal cancers in experimental trial settings. Deep learning provides a more standardized approach to conducting colonoscopies and endoscopies that may help homogenize efficient screenings for every patient, regardless of their endoscopist. In colorectal cancer, studies have illustrated increased ADR and decreased AMR using machine learning. In gastrointestinal studies, deep learning has shown it can detect cancer as well as expert endoscopists. Despite these advances, neural networks can only partially solve the cancer detection problem at hand. Even if neural networks improve the overall accuracy and sensitivity of cancer screenings, those gains will be of little use if patients do not get their recommended cancer screenings at the recommended time. At the moment, human intervention is still required, in conjunction with deep learning support, to give patients their most accurate results. How deep learning will perform in clinical settings as a secondary tool, both locally and globally, still needs to be fully understood. However, the preliminary studies discussed in this review illustrate promising results for future deep learning applications in revolutionizing oncology today.