The Fungus Among Us: Fungal Presence in Cancerous Growths
By Mirabel Sprague Burleson, Biological Sciences ‘24
Cancer can arise in nearly every tissue of the human body, stemming from complex and diverse mutations that affect many genes. It is incredibly widespread and lethal, currently one of the most common causes of death in the United States, second only to heart disease by a small margin [1]. Despite the immense resources poured into cancer research, it remains a major challenge in modern medicine. One barrier to the advancement of cancer treatments is finding a way to treat cancer cells without harming the healthy cells surrounding them. Common treatments such as chemotherapy and radiation therapy damage normally functioning cells, resulting in dangerous and debilitating side effects.
To develop a treatment capable of successfully isolating cancer cells, researchers must understand the hallmarks of cancer. Knowing the distinct characteristics of cancerous cells, such as abnormal division rate, increased mobility, and irregular organelles, allows researchers to develop treatments that attack these specific biological targets. Much modern research focuses on characterizing the distinct bacterial presence in cancerous cells, as bacteria are an accessible target for many treatments.
Recent studies on bacterial presence in cancer found metabolically active, cancer-specific communities of bacteria in tumor tissues, which led to their inclusion in the updated cancer hallmarks [2]. In the wake of these findings, Narunsky-Haziza et al. (2022) conducted a study to determine whether fungi could also be detected in tumor tissues [3]. Fungal presence in cancer cells could provide a new target for treatments.
The Narunsky-Haziza et al. study sourced samples from four independent cohorts: the Weizmann Institute of Science (WIS), The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the University of California, San Diego (UCSD) [3]. Narunsky-Haziza et al. took 17,401 tissue, blood, and plasma samples across 35 cancer types from these four cohorts. To the WIS cohort they added 104 samples made of a waxy substance called paraffin and 191 DNA-extraction negative controls, to account for potential contamination by environmental fungi or by fungal DNA introduced during handling and processing (the other cohorts' samples had adequate controls for fungal presence) [3,4,5]. These samples were then reexamined by Narunsky-Haziza et al. for fungal presence with internal transcribed spacer 2 (ITS2) amplicon sequencing [3].
The ITS2 region of nuclear ribosomal DNA is considered one of the best DNA barcodes for sequencing because of its variability between even very closely related species and its ease of amplification [6]. ITS2 amplicon sequencing allows researchers to examine the ITS2 region and identify variations between samples. Narunsky-Haziza et al. used this method to compare known fungal sequences against the sequences found in the samples, identifying the different fungal nucleic acids present in the samples' mycobiomes (fungal microbiomes) [3].
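To make the sequence-matching step concrete, here is a minimal sketch of ITS2-based taxonomic assignment using hypothetical, toy-length sequences. Real pipelines align full-length amplicons against curated fungal reference databases (such as UNITE) with dedicated tools; this is only the underlying idea.

```python
# Minimal sketch of ITS2-based taxonomic assignment.
# Hypothetical, toy-length sequences; real pipelines align full ITS2
# amplicons against curated reference databases with dedicated tools.

ITS2_REFERENCE = {
    "Candida albicans":         "TTGGTGTTGAGCAATACGACTTGGGTTTGCTTGAAAG",
    "Malassezia globosa":       "CCTGTTCGAGCGTCATTTCAACCCTCAAGCTCAGCTT",
    "Saccharomyces cerevisiae": "GTTTGAGCGTCATTTCCTTCTCAAACATTCTGTTTGG",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n

def assign_taxon(read: str, min_identity: float = 0.97) -> str | None:
    """Assign a read to the best-matching reference ITS2 sequence,
    or return None if nothing clears the identity threshold."""
    best_taxon, best_score = None, 0.0
    for taxon, ref in ITS2_REFERENCE.items():
        score = identity(read, ref)
        if score > best_score:
            best_taxon, best_score = taxon, score
    return best_taxon if best_score >= min_identity else None

# A read one substitution away from the C. albicans reference:
read = "TTGGTGTTGAGCAATACGACTTGGGTTTGCTTGAAAC"
print(assign_taxon(read))  # -> Candida albicans
```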
Using ITS2 amplicon sequencing, this study found that while tumor bacterial richness is much higher than fungal richness, there was a clear presence of fungi in the samples examined [3]. Fungi were detected in all 35 cancer types examined, although not all individual tumors were positive for fungal signals [3]. Most fungi were found within cancer and immune cells, similar to bacterial presence [3]. Interestingly, significant correlations were found between specific fungal presence and tumor types, immunotherapy, age, and smoking habits; however, whether these relationships are merely correlative or truly causal is yet to be determined [3]. An unexpected significant positive correlation between fungal and bacterial richness was also found in bone, breast, brain, and lung samples, though not in any of the others [3].
This study does present several caveats. For one, differences in sample preparation, sequencing, bioinformatic pipelines, and reference databases exist between the four cohorts, which affect microbiome analyses. Another potential issue is that although a large number of samples was included, the stages of cancer varied across samples in all four cohorts, creating high variability in the data [4,5]. The WIS and TCGA cohorts also showed high variation in mycobiome richness, which Narunsky-Haziza et al. suspect is due to the negative controls introduced to the WIS cohort as well as potential split reads found in the TCGA cohort [3,4,5]. A split read is a sequence that partially matches the reference genome in at least two places but has no continuous alignment to it. Split reads can indicate a difference in structure between the sample and the reference genome.
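To illustrate the definition, here is a deliberately simplified toy example (hypothetical sequences; real variant callers work from soft-clipped alignments in BAM files rather than exact string matches):

```python
# Toy illustration of a split read (hypothetical sequences).
# A split read has no contiguous full-length match to the reference,
# but its prefix and suffix each align at different reference positions.

reference = "ACGTACGTTTTTTTTTTGGCCGGCC"
read      = "ACGTACGTGGCCGGCC"   # joins two distant reference segments

def is_split_read(read: str, ref: str, seed: int = 8) -> bool:
    prefix, suffix = read[:seed], read[-seed:]
    p, s = ref.find(prefix), ref.find(suffix)
    # Full read absent, but both ends found at separated positions:
    return ref.find(read) == -1 and p != -1 and s != -1 and s > p + seed

print(is_split_read(read, reference))  # -> True
```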
Additionally, while four different staining methods were used to find fungal presence and tumor-specific localization patterns, they proved to have differing sensitivities across cancer types. As all of the staining methods used can only detect certain subsets of the fungal kingdom, a relatively high false-negative rate can be expected. In contrast, although each cohort used negative controls, some false-positive results are inevitable [3].
Although this study successfully broadened the cancer microbiome landscape, its findings do not establish any causal role for the fungal nucleic acids detected. Narunsky-Haziza et al. hope that this first pan-cancer mycobiome atlas will inform future cancer research and help characterize new targets for cancer diagnostics and therapeutics [3]. While it remains unclear whether fungal DNA plays a role in cancer development or severity, with further research, fungal presence could prove to be a helpful biomarker and potentially advance cancer treatments for the benefit of patients worldwide.
Could Training the Nose Be the Solution to Strange Post COVID-19 Odors?
By Bethiel Dirar, Human Biology ’24
Author’s Note: I wrote this article as a New York Times-inspired piece in my UWP102B course, Writing in the Disciplines: Biological Sciences. Having chosen the topic of parosmia treatments as a writing focus for the class, I specifically discuss olfactory training in this article. In the midst of the pandemic, this condition caught my attention once I found out about it through social media. It had me wondering what it would be like to struggle to enjoy food post-COVID infection. I simply hope that readers learn something new from this article!
Ask someone who has had COVID-19 if they've had issues with their sense of smell, and they may very well say yes. According to the CDC, one of the most prevalent symptoms of the respiratory disease is loss of smell [1]. However, there is a lesser-understood nasal problem unfolding due to COVID-19: parosmia. Parosmia, as described by the University of Utah, is a condition in which typically pleasant or at least neutral-smelling foods become displeasing or repulsive to smell and taste [2].
As a result of this condition, the comforts and pleasures of having meals, snacks, and drinks disappear. Those who suffer from this condition have shared their experiences through TikTok. In one video that has amassed 9.1 million views, user @hannahbaked describes how parosmia has severely impacted her physical and mental health. She tearfully explains how water became disgusting to her, and discloses hair loss and a reliance on protein shakes as meal replacements.
The good news, however, is that researchers have now identified a potential solution to this smelly situation that does not involve drugs or invasive procedures: this solution is olfactory training.
A new study shows that rehabilitation through olfactory training could allow patients with parosmia induced by COVID-19 to return to enjoying their food and drink. Olfactory training is a therapy in which pleasant scents are administered nasally [3].
Modified olfactory training was explored in a 2022 study as a possible treatment for COVID-19-induced parosmia. Aytug Altundag, MD, and the other researchers of the study recruited 75 COVID-19 patients with parosmia from the Acibadem Taksim Hospital in Turkey and sorted them into two groups: one received modified olfactory training, and the other served as a control and received no olfactory training [3]. Modified olfactory training differs from classical olfactory training (COT) in that it expands the number of scents used beyond COT's four: rose, eucalyptus, lemon, and clove [4]. These four scents were popularized in olfactory training because they represent different categories of odor (floral, resinous, fruity, and spicy, respectively) [5].
For 36 weeks, the treatment group was exposed twice a day to a total of 12 scents, all far from foul. In each 12-week period, four scents were administered. For the first 12 weeks, subjects started with eucalyptus, clove, lemon, and rose. During the next 12 weeks, the next set of scents was administered: menthol, thyme, tangerine, and jasmine. To round it off, for the last 12 weeks they smelled green tea, bergamot, rosemary, and gardenia. Throughout the study, the subjects would smell a scent for 10 seconds, then wait 10 seconds before smelling the next. The subjects completed the five-minute training sessions around breakfast time and bedtime [3].
To evaluate the results of the study, the researchers implemented a method known as the Sniffin' Sticks test. This test combines an odor threshold test, an odor discrimination test, and an odor identification test to form a TDI (threshold, discrimination, identification) score. The higher the score, the more normal an individual's olfactory perception. A composite score between 30.3 and the maximum of 48 indicates normal olfactory function, while scores below 30.3 point to olfactory dysfunction [3].
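As a minimal illustration of the scoring logic (a sketch with hypothetical subscores, not the study's software), the composite score can be computed and classified against the 30.3 cutoff:

```python
# Minimal sketch of Sniffin' Sticks TDI scoring with hypothetical
# subscores; each subtest contributes up to 16 points (max 48).

def tdi_score(threshold: float, discrimination: int, identification: int) -> float:
    """Composite TDI = threshold + discrimination + identification."""
    return threshold + discrimination + identification

def classify(tdi: float) -> str:
    """Apply the 30.3 normosmia cutoff cited above."""
    return "normal olfactory function" if tdi >= 30.3 else "olfactory dysfunction"

score = tdi_score(threshold=8.25, discrimination=11, identification=12)
print(score, "->", classify(score))  # 31.25 -> normal olfactory function
```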
The results of this research are promising. By the ninth month of the study, a statistically significant difference in average TDI scores had emerged between the group that received modified olfactory training and the control group (27.9 versus 14) [3]. This has led the researchers to believe that, with prolonged periods of the therapy, olfactory training could soon become a proven treatment for COVID-19-induced parosmia.
With this conclusion, there is greater hope now for those living with this smell distortion. Fifth Sense, a UK charity focusing on smell and taste disorders, has spotlighted stories emphasizing the need for effective treatments for parosmia. One member of the Fifth Sense community and sufferer of parosmia, 24-year-old Abbie, discussed the struggles of dealing with displeasing odors. “I ended up losing over a stone in weight very quickly because I was skipping meals, as trying to find food that I could eat became increasingly challenging,” she recounted to Fifth Sense [6].
If olfactory training becomes an established treatment option, eating and drinking might no longer be a battle for those with parosmia. Countless people suffering from the condition could finally experience the improvement in quality of life they so desperately need, especially with COVID becoming endemic.
REFERENCES:
- Centers for Disease Control and Prevention. Symptoms of COVID-19. Accessed November 20, 2022. Available from: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html
- University of Utah Office of Public Affairs. Parosmia after COVID-19: What Is It and How Long Will It Last? Accessed November 20, 2022. Available from: https://healthcare.utah.edu/healthfeed/postings/2021/09/parosmia.php
- Altundag Aytug, Yilmaz Eren, Kesimli Mustafa Caner. 2022. Modified Olfactory Training Is an Effective Treatment Method for COVID-19 Induced Parosmia. The Laryngoscope [Internet]. 132(7):1433-1438. doi:10.1002/lary.30101
- Yaylacı Atılay, Azak Emel, Önal Alperen, Aktürk Doğukan Ruhi, Karadenizli Aynur. 2022. Effects of classical olfactory training in patients with COVID-19-related persistent loss of smell. European Archives of Oto-Rhino-Laryngology [Internet]. 280(2): 757-763. doi:10.1007/s00405-022-07570-w
- AbScent. Rose, lemon, clove and eucalyptus. Accessed February 5, 2023. Available from: https://abscent.org/insights-blog/blog/rose-lemon-clove-and-eucalyptus
- Fifth Sense. Abbie’s Story: Parosmia Following COVID-19 and Tips to Manage It. Accessed November 23, 2022. Available from: https://www.fifthsense.org.uk/stories/abbies-story-covid-19-induced-parosmia-and-tips-to-manage-it/
Western Sandpiper Population Decline on the Pacific Coast of North America
By Emma Hui, Biological Sciences ‘26
INTRODUCTION
The fall migration of Western Sandpipers from the high Arctic to Southern California has always been a treasured gem of the season. Yet as the decades roll by, Western Sandpiper populations have been in continuous decline, and the rugged coastline of the Pacific Northwest seems lonelier than ever [1]. As a migratory bird species, the Western Sandpiper plays crucial ecological roles, serving as an indicator of ecosystem health and connecting diverse habitats across continents.
The purpose of this essay is to introduce the ongoing decline of Western Sandpiper populations in recent years, with a particular focus on the population decline in North America. This paper will provide an overview of Western Sandpiper migration and population changes, examine the potential causes behind the dynamics, and analyze the decline’s corresponding ecological effects. I will also explore possible remedies for the issue from the perspectives of habitat restoration, conservation, and legislative measures. The ultimate objective of this essay is to raise awareness and promote action for the ecological conservation of Western Sandpipers before it is too late.
Background
Western Sandpipers are small migratory birds that breed in the high Arctic regions of Alaska and Siberia and migrate south to the Pacific coasts of North and South America for winter. Their migration covers 15,000 kilometers every year along the Pacific Flyway, spanning from Alaska to South America. During winter, their nonbreeding season, they move to coastal areas with mudflats, estuaries, and beaches, which allow the birds to rest and forage for food. In spring, the Western Sandpipers take a similar route in reverse, stopping at critical habitats along the way until they reach the treeless Arctic tundra. There they breed in northwestern Alaska and eastern Siberia, and each female lays three to four eggs.
They measure 6 to 7 inches in length and have reddish brown-gold markings on their heads and wings. Their most salient features are their slender, pointed bills and long legs. The bills are adapted for foraging for crustaceans, insects, and mollusks in muddy areas, while their long, thin legs are used for wading in shallow water and sand. These small, darting birds can be seen in tidal areas, foraging in mudflats for invertebrates and biofilms at low and middle tides alongside other shorebird communities.
Foraging among multiple species makes shorebirds some of the most difficult birds to identify, especially since many species are quite similar in both morphology and call. Because Western Sandpipers blend so smoothly into these mixed flocks, it is not surprising that the decline of this small shorebird went unnoticed at first and was reported only when changes in population levels became more obvious.
Causes of Western Sandpiper population decline
The decline in the Western Sandpiper population has been continuous throughout the past decade. According to the North American Breeding Bird Survey, which monitors populations of breeding birds across the continent, the Western Sandpiper had a relatively stable population trend in the United States from 1966 to 2015, with an annual population decline of 0.1% over this period [2]. In more recent years, a research team in British Columbia, Canada that investigates estuary condition change has noticed the decline in Western Sandpipers inhabiting the Fraser River estuary. Observing the Western Sandpiper population during northern migration on the Fraser River estuary, the team reported a 54% decline in Western Sandpipers over its study period, which ended in 2019 []. The negative trend in migrating Western Sandpipers across North America is consistent with this Fraser River study. A study using geolocator wetness data to detect periods of migratory flight examined the status and trends of 45 shorebird species in North America, including the Western Sandpiper. The authors found that the Western Sandpiper population in the U.S. declined by 37% from 1974 to 2014, with an estimated population of 2,450,000 individuals in 2014 compared to 3,900,000 individuals in 1974 [3].
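The 37% figure follows directly from those two population estimates; a quick, purely illustrative check:

```python
# Verifying the cited decline from the two U.S. population estimates [3].
pop_1974, pop_2014 = 3_900_000, 2_450_000
decline = (pop_1974 - pop_2014) / pop_1974
print(f"{decline:.0%}")  # -> 37%
```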
Currently, in the BirdLife International Data Zone, the Western Sandpiper is listed as "Least Concern" because of its wide range, but its population is decreasing. The species faces threats from habitat loss and degradation, pollution, and disturbance, particularly in its wintering and stopover sites along the North American Pacific coast. Human activities, namely agricultural expansion and oil development, have contributed to the loss and degradation of the Western Sandpiper's breeding, wintering, and stopover habitats [4]. The loss of these habitats has led to reductions in breeding success, migration stopover times, and overwintering survival. Meanwhile, Western Sandpipers are constantly exposed to various pollutants, including pesticides, heavy metals, oil spills, and plastics. These contaminants affect the Western Sandpiper's health and reproductive success directly and also impact its prey and predators.
Climate change is also expected to have future impacts on the species. One shift climate change can impose is on the timing, intensity, and distribution of precipitation. These precipitation shifts have already caused droughts and floods in breeding and stopover habitats of the Western Sandpiper and other shorebirds, leading to reduced breeding success and increased mortality. Climate change also affects sea level, temperature, and the frequency and severity of extreme weather, all of which can degrade the quality of breeding habitat and food availability for Western Sandpipers.
The interactions between these factors are complex and can lead to a feedback loop of negative impacts on the population. As habitat loss leads to reduced food resources, Western Sandpipers’ overall health is negatively impacted, making them even more vulnerable to pollutants and contaminants.
Effects of Western Sandpiper population decline
The decline of the Western Sandpiper population can have significant impacts on ecosystems. As a migratory shorebird, the Western Sandpiper's ecological role lies in coastal environments; by preying on invertebrates along the shoreline, Western Sandpipers control their prey populations and balance the ecosystem. A decline in Western Sandpipers can allow prey species such as polychaete worms and bivalves to increase, which in turn can change the composition of other species that prey on similar invertebrates and perturb the ecosystem's equilibrium. Furthermore, many predator species, such as falcons and owls, depend on the Western Sandpiper as a food source, and its decline will negatively impact these predators.
Aside from predator-prey dynamics, Western Sandpipers also forage with many other migratory shorebird species in muddy areas along the coast. These birds, such as the Marbled Godwit and the Red Knot, depend on the same stopover habitats as the Western Sandpiper during their own migrations and thus compete for similar resources. As the Western Sandpiper population declines, changing interspecies dynamics will shift the survival and reproductive success of other species, disturbing the equilibrium of the stopover ecosystems.
Western Sandpipers are popular among birdwatchers and nature enthusiasts, and their migration stopover sites in the Pacific Northwest and Alaska play an important role in ecotourism, with its attendant economic and cultural values. Western Sandpiper ecotourism in the Copper River Delta, Alaska was estimated to generate over $1.5 million in revenue and 100 jobs [5].
Overall, the decline of the Western Sandpiper population can have a complex and far-reaching impact on both the ecosystem and human society. By interacting with native species and migratory species in their natural habitats, the Western Sandpiper’s role deeply interweaves within the ecosystem.
Conservation efforts and solutions
Conservation efforts to protect and restore Western Sandpiper populations are critical to maintaining ecosystem health. One of the main strategies for protecting the Western Sandpiper is conserving its stopover sites and breeding grounds by monitoring and researching invasive species and coastal development. Aside from consistent restoration of degraded habitats after human disturbance, preventing further human development in Western Sandpiper habitats is also critical to maintaining habitat health.
Educating the public about the importance of Western Sandpipers and their habitats is crucial to raising awareness and gaining support for conservation efforts. Outreach events such as public lectures, bird festivals, and school tours are great opportunities to connect people to the avian community and improve public consciousness of ecosystem conservation. One example is the Monterey Bay Birding Festival, an annual California festival held during the shorebird fall migration season, which promotes awareness of shorebirds through educational workshops and bird tours [6].
Currently, conservation efforts for shorebird populations face limitations in funding and coordination. Significant funding is required to restore what has been lost, but limited budgets restrict the scope and effectiveness of conservation approaches. In addition, since conservation efforts are implemented on a site-by-site basis, there is a need for improved coordination among agencies to solve problems together. Potential solutions include stronger avian- and habitat-conservation policies as well as the encouragement of sustainable tourism and outreach efforts.
CONCLUSION
The Western Sandpiper population in North American tidal areas has experienced a significant decline in recent years, largely due to human activities and the climate change they drive. Population changes of this small, long-legged shorebird affect the many species that interact and coexist with it in the coastal ecosystem. Western Sandpipers are among the most abundant shorebirds in North America and play a vital part in the ecological and cultural life of the coast. Population dynamics vary from year to year and between populations, and increased monitoring and conservation of the Western Sandpiper and its habitats are essential to ensuring the species' survival. We need to investigate the causes behind the population's recent decline and take action before the negative effects go too far and these ballerinas of the beach are unable to recover.
REFERENCES
[1] Andres, B., Smith, B. D., Morrison, R. I. G., Gratto-Trevor, C., Brown, S. C., Friis, C. A., … Paquet, J. (2013). Population estimates of North American shorebirds, 2012. Wader Study Group Bulletin, 119, 178-194.
[2] The Cornell Lab of Ornithology. (n.d.). Western Sandpiper Overview, All About Birds, Cornell Lab of Ornithology. Cornell University. https://www.allaboutbirds.org/guide/Western_Sandpiper/overview
[3] The Wader Study Group. (n.d.). Geolocator Wetness Data Accurately Detect Periods of Migratory Flight in Two Species of Shorebird. https://www.waderstudygroup.org/article/9619/
[4] Smith, B. D., Andres, B. A., & Morrison, R. I. G. (2017). Declines in shorebird populations in North America. Wader Study, 124(1), 1-11.
[5] Vogt, D. F., Hopey, M. E., Mayfield, G. R. III, Soehren, E. C., Lewis, L. M., Trent, J. A., & Rush, S. A. (2012). Stopover site fidelity by Tennessee warblers at a southern Appalachian high-elevation site. The Wilson Journal of Ornithology, 124(2), 366-370. https://doi.org/10.1676/11-107.1
[6] Cornell Lab of Ornithology. (2019, September 24). Monterey Bay Festival of Birds [Web log post]. All About Birds. https://www.allaboutbirds.org/news/event/monterey-bay-festival-of-birds/#
[7] Haig, S. M., Kaler, R. S. A., & Oyler-McCance, S. J. (2014). Causes of contemporary population declines in shorebirds. The Condor, 116(4), 672-681.
[8] Kallenberg, M. (2021). The 121st Christmas Bird Count in California. Audubon. https://www.audubon.org/news/the-121st-christmas-bird-count-california
[9] Reiter, P. (2001). Climate change and mosquito-borne disease. Environmental Health Perspectives, 109(1). https://doi.org/10.1289/ehp.01109s1141
[10] Sandpipers Go with the Flow: Correlations … – Wiley Online Library. (n.d.). Wiley Online Library. https://doi.org/10.1002/ece3.7240
[11] The Wader Study Group. (n.d.). Comparison of Shorebird Abundance and Foraging Rate Estimates from Footprints, Fecal Droppings, and Trail Cameras. https://www.waderstudygroup.org/article/13389/
[12] US Fish and Wildlife Service. (2022). Western Sandpiper (Calidris mauri). https://www.fws.gov/species/western-sandpiper-calidris-mauri
[13] Iwamura, T., & Possingham, H. P. (2013). Migratory connectivity magnifies the consequences of habitat loss from sea-level rise for shorebird populations. Proceedings of the Royal Society B, 280(1761), 20130325. https://doi.org/10.1098/rspb.2013.0325
Review of Literature: Use of Deep Learning for Cancer Detection in Endoscopy Procedures
By Nitya Lorber, Biology and Human Physiology ’23
Author’s Note: I think now more than ever, the reality of artificial intelligence is knocking on our doors. We are already seeing how the use of AI programs is becoming more and more normalized in our daily lives. AI is now driving our cars, talking to us through chatbots, and opening our phones with facial recognition. Frankly, I find it both incredible and intimidating to have an artificial, computerized program making decisions with the intent of modeling the reasoning capabilities of the human mind. As an aspiring oncologist, I was really interested to see how AI is being used in the healthcare system, specifically in the field of oncology. So when my biological sciences writing class asked me to write a literature review on a topic of my choice, it was a no brainer – no AI needed. I hope that readers of this review come away with a sense of comfort that AI is being used to improve cancer detection and potentially save lives.
ABSTRACT
Deep learning is a rapidly developing technology designed to emulate and extend human intellect [1]. With technological improvements and the development of state-of-the-art machine learning algorithms, the applications of deep learning in medicine, specifically in the field of oncology, are endless. Several facilities worldwide train deep learning models to recognize lesions, polyps, neoplasms, and other irregularities that may suggest the presence of various cancers. For colorectal cancers, deep learning can help with early detection during colonoscopies, increasing the adenoma detection rate (ADR) and decreasing the adenoma miss rate (AMR), both essential indicators of colonoscopy quality. For gastrointestinal cancers, deep learning systems such as ENDOANGEL, GRAIDS, and A-CNN can help with early detection, giving patients a higher chance of survival. Further research is required to evaluate how these programs will perform in a clinical setting as a potential secondary tool for diagnosis and treatment.
INTRODUCTION
Artificial intelligence is the ability of a computer to execute functions generally linked to human intelligence, such as the ability to reason, find meaning, summarize information, or learn from experience [2]. Over the years, computing power has significantly improved, and this progress has opened several opportunities for machine learning applications in medicine [1]. Generally, deep learning in medicine uses machine learning models to search medical data and highlight pathways for improving patient health and well-being, most commonly through physician decision support and medical imaging analysis [3]. Machine intelligence collects data and identifies pixel-level features in microimaging structures that are easily overlooked or invisible to the naked eye [1, 4]. Deep learning is a subfield of machine learning that uses artificial neural networks to learn patterns and relationships in data. Its basic structure involves trained interconnected nodes, or "neurons," organized into layers [1]. What sets deep learning apart from other types of machine learning is the depth of the neural network, which allows it to learn increasingly complex features and relationships in the data. The field of oncology has begun to incorporate deep learning into cancer screenings by training models to recognize lesions, polyps, neoplasms, and other irregularities that may suggest the presence of various cancers, including lung, breast, and skin cancers. In experimental trials, deep learning has shown its ability to aid in the early detection of a variety of cancers, specifically colorectal and gastrointestinal cancers, and although few studies show its performance in clinical settings, preliminary studies illustrate promising results for future deep learning applications in revolutionizing oncology today.

The traditional approach to detecting colorectal and gastrointestinal cancers is through screening endoscopy procedures, which allow physicians to view internal structures [5-8]. Colonoscopies are a type of endoscopy in which a long, flexible tube called the colonoscope is inserted into the rectum and large intestine to detect abnormalities, such as precancerous and cancerous lesions [7-9]. Advancing the diagnostic sensitivity and accuracy of cancer detection through deep learning helps save lives by catching the disease before it progresses too far [1, 4].
DETECTION OF COLORECTAL CANCERS
Colorectal cancers (CRC), cancers of the colon and rectum, have the second-highest cancer death rate for men and women worldwide [5]. Frequent colonoscopy and polypectomy screening can reduce the occurrence of and mortality from CRC by up to 68% [5, 7]. However, several significant factors determine colonoscopy quality: the number of polyps and adenomas found during the colonoscopy, procedural factors such as bowel preparation, morphological characteristics of the lesion, and, most importantly, the endoscopist [5-8]. The performance of the endoscopist varies with several factors, including level of training, technical and cognitive skills, knowledge, and years of experience inspecting the colorectal mucosa to recognize polypoid (elevated) and non-polypoid (non-elevated) lesions [6, 7].
The most essential and reliable performance indicator for individual endoscopists is their adenoma detection rate (ADR) [5, 6]. ADR is the percentage of average-risk screening colonoscopies in which one or more adenomatous colorectal lesions are found, quantifying the endoscopist's sensitivity for detecting CRC neoplasia [5, 7]. ADR is inversely related to the incidence of and mortality from CRC after routine colonoscopies [5-7]. Another performance indicator commonly used to investigate differences between endoscopists or technologies is the adenoma miss rate (AMR), calculated from sets of two repeated colonoscopies on the same subject as the proportion of lesions missed in the first pass but found in the second [7]. The issue with the current approach to detecting CRC is the variability in performance, leading to widely diverse ADRs and AMRs among endoscopists. This variability often results in missed polyps and overlooked adenomatous lesions, which can have serious consequences for patients [5-8].
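Since both indicators are simple rates, they are easy to state precisely in code. The following is a minimal sketch using hypothetical screening records, not data from any cited study:

```python
# Sketch of the two colonoscopy quality indicators defined above,
# computed from hypothetical screening records.

def adenoma_detection_rate(colonoscopies: list[int]) -> float:
    """ADR: fraction of screening colonoscopies that find at least
    one adenoma (each entry = adenomas found in one procedure)."""
    return sum(n >= 1 for n in colonoscopies) / len(colonoscopies)

def adenoma_miss_rate(tandem_pairs: list[tuple[int, int]]) -> float:
    """AMR from tandem (back-to-back) colonoscopies on the same subject:
    adenomas missed in the first pass but found in the second, divided
    by the total adenomas found across both passes."""
    missed = sum(second for _, second in tandem_pairs)
    total = sum(first + second for first, second in tandem_pairs)
    return missed / total

print(adenoma_detection_rate([0, 2, 1, 0, 0]))      # 0.4  -> 40% ADR
print(adenoma_miss_rate([(3, 1), (2, 0), (1, 1)]))  # 0.25 -> 25% AMR
```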
DEEP LEARNING IN COLONOSCOPIES
Deep learning provides a possible solution to the endoscopist performance variability problem. It could offer a standardized approach to colonoscopy imaging that would help eliminate inaccuracies generated by endoscopists who may be distracted, exhausted, or less experienced [6, 8]. Over the past few years, several studies have analyzed deep learning's impact on endoscopy quality (i.e., ADR and AMR) and its role in reducing the rate of CRCs. Convolutional neural networks (CNNs) excel at image analysis tasks, including finding and categorizing lesions [5]. Another experimental approach involves developing a computer-aided detection (CADe) system, built on an original CNN-based algorithm, to assist endoscopists in detecting colorectal lesions during colonoscopy [7]. Overall, deep learning systems can improve endoscopy quality and possibly reduce the CRC death rate by increasing ADR and polyp detection rates in the general population [5-8].
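For a rough sense of what such a CNN-based detector looks like under the hood, here is a minimal PyTorch sketch of a per-frame polyp classifier. The architecture and names are illustrative assumptions only; this is not the GI Genius system or any algorithm from the cited studies, which are far deeper and trained on large annotated video datasets.

```python
# Minimal sketch of a CNN frame classifier of the kind CADe systems
# build on (illustrative architecture, not any cited system).
import torch
import torch.nn as nn

class PolypFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 112 -> 56
            nn.AdaptiveAvgPool2d(1),               # global average pool
        )
        self.head = nn.Linear(32, 2)               # polyp vs. no polyp

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)

model = PolypFrameClassifier()
frame = torch.randn(1, 3, 224, 224)                # one RGB endoscopy frame
probs = torch.softmax(model(frame), dim=1)
print(probs)  # per-frame probabilities for [no polyp, polyp]
```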
The finding that deep learning can increase ADR has prompted several follow-up studies on how this technology may impact the current system. For instance, it was not previously known how the ADR increase from deep learning relates to physician experience. In investigating this relationship, Repici et al. (2022) discovered that both experienced and non-experienced endoscopists displayed a similar ADR increase during routine colonoscopies with CADe assistance compared to those without it [6]. Surprisingly, this study concluded that deep learning was a significant factor in the ADR score, while the endoscopist's level of experience was not [6]. Along with increasing ADR, Kamba et al. (2021) explored how deep learning would impact AMR and found a reduced AMR in colonoscopies conducted with CADe assistance compared to standard colonoscopies [7]. This study further confirmed the conclusions of Repici et al., finding that endoscopists of all experience levels using CADe benefit from the reduced AMR and increased ADR [6, 7].
Moreover, deep learning is exceptionally good at detecting flat lesions, which are often overlooked by endoscopists [6-8]. In evaluating deep learning for detecting lesions in Lynch Syndrome (LS), the most common hereditary CRC syndrome, Hüneburg et al. found a higher detection rate of flat adenomas using deep learning compared to high-definition white-light endoscopy (HD-WLE), a standard protocol commonly used to examine polyps [8]. However, unlike in other studies, the overall ADR was not significantly different between the deep learning and HD-WLE groups, most likely because of the study's small sample size and exploratory nature [8]. This study was not the only one to observe a lack of significant increase in ADR. Zippelius et al. (2022) assessed the accuracy and diagnostic performance of a commercially available deep learning system, GI Genius, in real-time colonoscopy [5]. Although the GI Genius system performs well in daily clinical practice and could well reduce performance variability and increase overall ADR among less experienced endoscopists [8], it performed no better than expert endoscopists [5]. Overall, deep learning proved superior or equal to standard colonoscopy performance, but never worse [5-8].
DETECTION OF UPPER GASTROINTESTINAL CANCERS
Upper gastrointestinal cancers, including esophageal and gastric cancer, are among the most common malignancies and causes of cancer-related deaths worldwide [4, 10, 11]. Of these, gastric cancer is the fifth most common form of cancer and the third leading cause of cancer-related deaths worldwide, with approximately 730,000 deaths each year [10, 11]. Most upper gastrointestinal cancers are diagnosed at late stages because their signs and symptoms go unnoticed or are too general to prompt a correct diagnosis [10]. On the other hand, if these cancers are detected early, the 5-year survival rate can exceed 90% [10, 11]. To diagnose gastrointestinal cancers, endoscopists must first conduct esophagogastroduodenoscopy (EGD) procedures, examining upper gastrointestinal lesions to find early gastric cancer (EGC) [4, 11]. However, as with colonoscopies, endoscopists require long-term specialized training and experience to accurately detect the difficult-to-see EGC lesions with EGD [4, 11]. EGD quality varies significantly with endoscopist performance and consequently impacts patient health [4, 10-11]. Because of the subjective, operator-dependent nature of endoscopic diagnosis, many patients are at risk of leaving their examinations with undetected upper gastrointestinal cancers, especially in less developed, remote regions [10]. Rates of undetected upper gastrointestinal cancers run as high as 25.8%, and 73% of these cases result from endoscopists' mistakes, such as failing to detect a specific lesion or mischaracterizing a lesion as benign during biopsy [11]. There is a dire need for improved endoscopy quality and reliability, as current examinations rely too heavily on endoscopist knowledge and experience, introducing too much variability into EGC detection [10, 11].
DEEP LEARNING IN ENDOSCOPIES
Deep learning systems may effectively monitor blind spots during EGDs, but very little research on deep learning applications in upper gastrointestinal cancers was conducted before 2019 [4, 11]. Previously, deep learning had mainly been used to distinguish between neoplastic, or monoclonal, and non-neoplastic, or polyclonal, lesions [10, 11]. However, CNNs were not among the researched algorithms, and the systems examined at the time could not sufficiently distinguish between malignant and benign lesions [10, 11]. The first functional deep learning system developed specifically to detect gastric cancer was the 2019 "original convolutional neural network" (O-CNN), but this system had low precision, rendering it unviable for clinical practice [11]. This prior lack of research led to the development of three deep learning systems for detecting and diagnosing upper gastrointestinal cancers, in hopes of catching the disease in its early stages: GRAIDS, ENDOANGEL, and A-CNN.
The first deep learning system developed and validated was the Gastrointestinal Artificial Intelligence Diagnostic System (GRAIDS), a deep learning semantic segmentation model capable of providing the first real-time automated detection of upper gastrointestinal cancers [10]. Luo et al. (2019) trained GRAIDS to detect suspicious lesions during endoscopic examination using over one million endoscopy images from six hospitals of varying experience levels across China [10]. GRAIDS is designed to provide real-time assistance in diagnosing upper gastrointestinal cancers during endoscopies, as well as to assess images retrospectively [10]. In the study, Luo et al. found that GRAIDS could detect upper gastrointestinal cancers both retrospectively and in a prospective observational setting with high accuracy and specificity [10]. GRAIDS's sensitivity is similar to that of expert endoscopists. However, GRAIDS cannot recognize some gastric contours delineated by experts, leading to an increased risk of false positives and suggesting that the system is most effective as a secondary tool [10]. GRAIDS is seen as a cost-effective method for early cancer detection that can help endoscopists of every experience level [10].
The second deep learning diagnostic system is the Advanced Convolutional Neural Network (A-CNN), an upgraded version of O-CNN developed by Namikawa et al. (2020) [11]. Improving upon its predecessor, the A-CNN successfully distinguished gastric cancers from gastric ulcers with high accuracy, sensitivity, and specificity [11]. This is an essential improvement because gastric ulcers are often mistaken for cancer, leading to unnecessary cancer treatments. The A-CNN can now help endoscopists with early diagnosis, improving survival rates for gastric cancers [11]. In addition, the program helps standardize the endoscopic approach, assuaging some of the endoscopist performance variability [11].
The third deep learning system is ENDOANGEL, developed by Wu et al. (2021). Like A-CNN, ENDOANGEL is an upgrade of an older CNN-derived algorithm called WISENSE [4]. Before the update, WISENSE demonstrated the ability to monitor blind spots and create photodocumentation in real time during EGD [4]. Compared to WISENSE, ENDOANGEL achieved real-time monitoring during EGD with fewer endoscopic blind spots, a longer inspection time, and EGC detection with high accuracy, sensitivity, and specificity [4]. The program shows potential for detecting EGC in real clinical settings [4].
FUTURE IMPROVEMENT IN DEEP LEARNING DEVELOPMENT
Because deep learning is a relatively new technology, much of the available research is prospective. These studies attempt to determine whether deep learning is a viable approach to reducing endoscopist performance variability, but most call for further research to show how the technology will perform in a clinical setting. For example, most studies involved deep learning systems that were not commercially available and were conducted in highly specialized centers, so they cannot indicate how deep learning will perform at lesion detection in daily clinical practice on different populations around the world [4, 6-7, 10-11]. Additionally, studies need to incorporate greater patient sample sizes before they can be generalized to larger populations [7, 8]. Lastly, researchers should still consider endoscopist performance in their trials to explore every option and to ensure that each patient receives the same standard of care regardless of their physician or that physician's views on and acceptance of deep learning technology [4, 5, 8]. These preliminary studies show potential, but the systems need improvement and further research before they can be used as standalone options [4, 10-11].
CONCLUSION
Overall, deep learning has demonstrated an impressive ability to detect colorectal and gastrointestinal cancers in experimental trial settings. Deep learning provides a more standardized approach to conducting colonoscopies and endoscopies that may help make screenings equally effective for every patient, regardless of their endoscopist. In colorectal cancers, studies have illustrated increased ADR and decreased AMR using machine learning. In gastrointestinal studies, deep learning has shown it can detect cancer as well as expert endoscopists. Despite these advances, neural networks can only partially solve the cancer detection problem at hand. Even if neural networks improve the overall accuracy and sensitivity of cancer screenings, they will be of little use if patients do not get their recommended screenings at the recommended times. At the moment, human intervention is still required, in conjunction with deep learning support, to give patients their most accurate results. How deep learning will perform in clinical settings as a secondary tool, locally and globally, is still not fully understood. However, the preliminary studies discussed in this review illustrate promising results for future deep learning applications in revolutionizing oncology today.
Willowbrook’s Hepatitis Study
By J Capone, Agriculture and Environmental Education Major, ’24
The Willowbrook State School was a housing institution created by New York State in 1947 to house intellectually disabled children and young adults. At the time, there were few public resources for caregivers, and state schools like Willowbrook were created to address that problem. Conditions at Willowbrook were horrific – rampant disease, neglect, and abuse meant most residents lived sad lives on the institution's grounds. Robert Kennedy, then a senator from New York, described it as a "snake pit" after his unplanned visit in 1965 exposed the horrific conditions. Dr. Saul Krugman, a professor of epidemiology, was brought to Willowbrook in 1955 to control and mitigate the infectious diseases that spread like wildfire through its halls. However, Krugman did not simply work to prevent the spread of disease; in some cases, he explicitly encouraged it among residents so he could test his theories on possible treatments. Thus, the Willowbrook Study was born.
Built on Staten Island to originally house 4,000 patients, the institution quickly swelled to 6,000, peaking at 6,200 in 1969 [10]. There were often not enough resources, including basic clothing and staff, to go around. Conditions at Willowbrook were grim; some 60% of patients were not toilet trained, and others could not feed or bathe themselves [9]. Abuse and neglect ran rampant, along with the infectious diseases patients suffered from due to unsanitary conditions. Facing these problems, the directors of the institution hired Dr. Saul Krugman from the NYU medical school in 1955 to deal with the endemic diseases afflicting patients at Willowbrook. Dr. Krugman began by implementing an epidemiological survey to gauge the extent of the problem. These surveys found that children had a 1 in 2 chance of catching hepatitis in their first year at Willowbrook [6]. Further surveys showed that 90% of patients had markers for hepatitis A, indicating a previous infection [6]. Hepatitis A is a disease that affects the liver, causing jaundice, loss of appetite, and abdominal pain, and it usually spreads via fecal-oral contamination. Dr. Krugman's studies also identified hepatitis B, which spreads through blood and bodily fluids, circulating in both the adult and child populations at Willowbrook. In an institution where over half the population was not toilet trained and there were never enough caretakers, it is no mystery how hepatitis became such an intense and ingrained problem in Willowbrook's halls [3].
Faced with these grim conditions, Dr. Krugman came to a conclusion: discovering a way to inoculate children against hepatitis would help not only those institutionalized but could reap benefits around the world. In short, he wanted to use the conditions already established at Willowbrook to create a vaccine for hepatitis. After gathering consent from parents, Dr. Krugman and his team ran several experiments to observe the benefits of injecting gamma globulin, a type of antibody-containing blood plasma, from those who had recovered from hepatitis into those who had not yet been infected [1]. Some experiments exposed children to hepatitis with no globulin injections; in others, children were injected and then exposed to the virus; and in some, children were injected and never exposed, all carefully observed by the medical team. One way the study exposed children to the hepatitis virus was by taking the feces of infected residents and mixing it into chocolate milk for the study's participants to consume unwittingly [10]. The children who participated in these studies were housed separately from the rest of the patients, in newer, cleaner facilities with round-the-clock care, while the other residents of Willowbrook still lived in squalor [6]. Krugman's studies later expanded to measles and rubella, resulting in his more than 20-year tenure as a medical director at Willowbrook. Only much later in his career did he receive backlash from the medical community regarding his study participants and methods.
Krugman claimed he did not choose Willowbrook so he could prey on vulnerable disabled children. Instead, he argued that his experiments helped the children at Willowbrook, and he defended their morality and ethics until the day he died. He argued that hepatitis was already rampant at Willowbrook; anyone who came to live there was bound to get it at some point. Under his lab's controlled experiments, Dr. Krugman argued, the patients had better care and a better chance of survival than in the main facilities at Willowbrook [6]. Through his experiments, residents had the opportunity to become immune to hepatitis without ever falling ill and facing its dangerous effects. Even if they did fall ill, they had access to excellent care from his team of doctors and nurses. However, Krugman missed a key component of any study's ethical backing. Purposefully infecting children with hepatitis, no matter how "likely" they are to get it in the future, puts them needlessly in harm's way. No matter how well cared for the patients were, hepatitis is a potentially fatal disease. Additionally, infecting subjects through feces-contaminated food and drink is a deplorable method that hints at possibly worse conditions within the Willowbrook study. Nobody, no matter what they have consented to, should unwittingly drink human excrement. This egregious violation of moral and ethical standards, even by the professional standards of the 1950s and 60s, shows how poorly these patients were treated. Every physician takes an oath to first do no harm. Dr. Krugman violated this oath with his treatment of intellectually disabled children at Willowbrook State School.
Other issues with the Willowbrook Study concerned its methods of informed consent. On paper, everything seemed fine. Potential patients' parents were given a tour of the facility, allowed to ask the researchers questions, and given various forms of education on the purpose of the study. They met with a social worker and could discuss the study with their private doctors if they wished. Parents knew they could withdraw from the study at any time. No orphan or ward of the state was allowed to become a participant in the hepatitis studies. Dr. Krugman also asserted that his methods of informed consent were innovative and paved the way for the standards of human experimentation. However, there is evidence that the undue influence of admission to Willowbrook was a further chink in Krugman's ethical armor. Willowbrook was already inundated, but it was one of the only state institutions for intellectually disabled children operating in New York State. Parents believed that the only way to get their child into the selective school was to allow them to participate in the hepatitis study, which conferred priority registration. Otherwise, these caregivers had practically no options. State institutions were some of the only interventions available at the time, and Willowbrook already had a huge waitlist while operating at 1.5 times capacity. Although sending one's child to an overpopulated institution may not seem like the best option, sometimes it was all these families had to provide care for their loved ones. This undue influence further establishes how unethical Dr. Krugman's study was.
What brought about the downfall of a 20-year study? Geraldo Rivera, an up-and-coming investigative reporter, created a documentary, airing on ABC in 1972, that showed the horrible abuse and neglect people at the Willowbrook State School had to endure. Willowbrook: The Last Great Disgrace led to a call to arms from the people of Staten Island, who were horrified that such an atrocity could occur on their very shores. A class-action lawsuit was filed that same year, with a final ruling that Willowbrook had to begin closing procedures in 1975. Around this time, other members of the medical field began criticizing Dr. Krugman's studies and questioning the necessity of human experimentation for immunization studies. Dr. Krugman's later work developing hepatitis B vaccines using chimpanzees created conflict in the medical community: chimpanzees are considered an acceptable model for human vaccine-development studies, yet Krugman had run human trials before primate ones. Even after the study at Willowbrook ended, however, Dr. Krugman was still lauded for his advancements in epidemiology, not just for hepatitis but also for the measles and rubella vaccinations developed later in his time at Willowbrook. Until the very end, Krugman defended his choices and decisions regarding the treatment and methods used at Willowbrook State School.
Public outrage over the conditions at Willowbrook spurred several laws and acts into being over the 1970s and 1980s. After the 1972 class-action lawsuit was settled in 1975, the Willowbrook Consent Decree was signed, requiring the institution to reduce its population from around 5,000 to 250 within six years, among other reforms to the treatment of patients at the facility. Other acts, such as the Education for All Handicapped Children Act (1975) and the Developmentally Disabled Assistance and Bill of Rights Act (1975), worked to ensure disabled populations were protected in society. Programs such as the Protection and Advocacy System, established under the latter act, were formed to further preserve the rights of disabled individuals. The Belmont Report, published in 1979, also worked to establish ethical standards for human experimentation built on its three principles: Respect for Persons, Beneficence, and Justice.
The tragedy of Willowbrook State School is a permanent mark on the scientific community's record of mistreating human research participants. The unacceptable treatment and conditions that institutionalized children and adults were forced to endure were a disgrace to scientific research. While many scientific discoveries resulted from this study, including contributions to the hepatitis A and B vaccines, the ends never justify the means in human experimentation.
A Warmer World Leading to a Health Decline
By Abigail Lin, Biological Sciences
INTRODUCTION
Rising temperatures due to global climate change have several detrimental impacts on the world around us. This paper analyzes the consequences of climate change, specifically temperature changes, within California. The livelihoods of farmers and fishermen, the distribution of disease, and fire intensity are examples of how California is affected by this crisis. Climate change is especially visible in California because the state dominates the nation's production of fruits and nuts, two water-intensive crop groups. The state's reliance on large quantities of water to fuel its agricultural system makes it particularly susceptible to drought. The proliferation of detrimental disease vectors, loss of beneficial crops, and elevated levels of dryness point to a complex interaction between California ecosystems and climate change.
Crops
Many farmers and agricultural workers in California are impacted by changing climates, as the state is a major agricultural hotspot. Two-thirds of the nation's fruits and over one-third of its vegetables are produced in California [1]. Crops such as apricots, peaches, plums, and walnuts are projected to be unable to grow in 90% or more of the Central Valley by the end of the century because of the increase in disease, pests, and weeds that accompanies rising temperatures [1].
Figure 1. Projection of crop failure by the end of the century. Heat increases diseases, pests, and weeds. Plum, apricot, peach, and walnut crops will be unable to grow in 90% of the Central Valley as a result.
Crop yields significantly decrease when heat-sensitive plants are not grown in cool enough conditions. Fruits and nuts require chill hours, hours when the temperature is between 32 and 45 degrees Fahrenheit, to ensure adequate reproduction and development [2]. With increasing temperatures, however, crops are receiving fewer chill hours during the winter. California grows 98% of the country's pistachios, but changes in chill hours have affected fertilization [3]. A study found that pistachios need 700 chill hours each winter, yet there have been fewer than 500 chill hours over the past four years combined [1]. As a result, in 2015, 70% of pistachio shells were missing the kernel (the edible part of the nut) that should have been inside [3].
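Chill-hour accounting of this kind can be computed directly from hourly temperature records. Below is a minimal sketch with hypothetical readings, using the 32-45 °F band defined above [2]:

```python
# Minimal sketch of chill-hour accounting as described above:
# an hour counts toward chilling when the temperature falls
# between 32 and 45 degrees Fahrenheit (hypothetical readings).

def chill_hours(hourly_temps_f: list[float]) -> int:
    """Count hours with temperatures in the 32-45 F chilling band."""
    return sum(32 <= t <= 45 for t in hourly_temps_f)

winter_readings = [32.5, 40.0, 44.9, 46.2, 31.0, 38.7]  # hypothetical
print(chill_hours(winter_readings))  # -> 4

# Against the ~700 chill hours pistachios need each winter, a season
# total from such a tally shows whether fertilization is at risk.
```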
Repeated crop failures have also left farmers mentally taxed. Evidence suggests that suicide rates for farmers are already rising in response to the farm debt that accumulates after poor crop yields [4]. Not only is people's financial well-being threatened by climate change, but so is their mental health. Mental stress threatens to rise as climates warm around the world, causing economic loss and upending agricultural careers.
Crab Fisheries
Crab fisheries and fishers in California are also negatively impacted by the rise in temperatures. Warming oceans have led to uncontrollable growth of algal blooms, which contaminate crab meat with domoic acid, a potent neurotoxin that causes seizures and memory loss [5]. The spread of this toxin has forced many fisheries to close. During the 2015-2016 season, California fishers lost over half of their regular crab catch and qualified for more than 25 million dollars in federal disaster relief [5]. In response to the financial loss, fishers adapted by catching seafood species other than crab, moving to locations where algal blooms had not contaminated their catch, or, in the worst cases, stopping fishing altogether [5]. California crab fishers' careers have already been dramatically altered by global warming, and algal blooms will only become more extensive if warming continues.
Disease
Temperature plays a major role in the prevalence of infectious diseases because it influences the activity, growth, development, and reproduction of disease vectors, the living organisms that carry infectious agents and transmit them to other organisms. It is predicted that warm, humid climates will allow bacteria and viruses, mosquitoes, flies, and rats (all common disease vectors) to thrive [6]. Most animal disease vectors are r-selected, meaning they put little parental investment into individual offspring but produce many of them. Warm temperatures allow r-selected species to grow quickly and reproduce often. However, warm temperatures also speed up biochemical reactions and place heavy energetic demands on organisms' metabolisms [7]. In response, disease-vector ectotherms, organisms that rely on external heat sources to regulate body temperature, have successfully adapted to changing temperatures. These organisms thermoregulate, carrying out actions that maintain body temperature [7]. Behavioral thermoregulation has shifted the geographical distribution of infectious diseases as disease vectors move to the warm environments that they favor [7].
Initial models of the distribution and prevalence of disease suggested a net increase in the geographical range of diseases, while more recent models suggest a shift in disease distribution [7]. Recent models recognize that vector species have upper and lower temperature limits that affect disease distribution [7]. It is estimated that by 2050, because the conditions necessary for malaria transmission will shift, there will be 23 million more cases of malaria at higher latitudes, where infections were previously nonexistent, but 25 million fewer cases at lower latitudes, where malaria previously proliferated rapidly through populations, leaving little net change overall [7].
Figure 2. Shift of malaria disease distribution by 2050. Higher latitudes will have 23 million more cases of malaria while lower latitudes will have 25 million less cases. Although habitat suitability changed, there is little net change in malaria cases.
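The shift described above follows from vectors having thermal optima and hard upper limits, so a warming site can either gain or lose suitability. The sketch below illustrates this with a Briere-type thermal performance curve, a standard unimodal form in vector ecology; all parameters here are invented for illustration and are not taken from the cited models.

# Illustrative Briere-type thermal performance curve for a hypothetical
# disease vector: performance rises with temperature, peaks, then falls
# to zero at an upper thermal limit. Parameters are invented.

def briere(temp_c, c=2.5e-4, t_min=15.0, t_max=35.0):
    """Relative vector performance at a given temperature (deg C)."""
    if temp_c <= t_min or temp_c >= t_max:
        return 0.0
    return c * temp_c * (temp_c - t_min) * (t_max - temp_c) ** 0.5

# A cool site warming from 14 to 18 deg C gains suitability, while a hot
# site warming from 33 to 37 deg C loses it: a shift, not a net gain.
for site, (before, after) in {"cool high-latitude": (14, 18),
                              "hot low-latitude": (33, 37)}.items():
    print(site, round(briere(before), 3), "->", round(briere(after), 3))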
Cases of coccidioidomycosis (Valley fever), an infectious disease contracted by inhaling Coccidioides fungal spores, have recently reached record highs in California [8]. Valley fever is especially prevalent in areas experiencing fluctuating climates, vacillating between extreme drought and high precipitation [8]. After studying 81,000 cases collected over 20 years, researchers determined that major droughts have a causal relationship with increased coccidioidomycosis transmission [8]. Initially, drought suppresses disease transmission because it prevents proliferation of the Coccidioides fungi; transmission then rebounds in the years following the drought because competing bacteria die off in the high heat [8]. Fungi have a number of traits that make them more tolerant of drought than bacteria, including osmolytes for maintaining cell volume, thick cell walls that mitigate water loss, melanin, which aids in thermoregulation, and hyphae that extend through the soil to forage for water [9]. Disease spikes are seen after droughts: the wet season spanning 2016 and 2017, for example, saw about 2,500 more cases of Valley fever than the previous year [8].
The role of rising temperatures in increasing Valley fever cases is evident in Kern County, one of the hottest and driest regions of California. Kern County has the highest Valley fever incidence rates in the state, with 3,390 cases occurring during a 47-month drought from 2012 to 2016 [8]; its drought-prone conditions drive this burden. As climate change pushes areas of California that are usually cool and wet year-round into alternating dry and wet conditions, Valley fever cases are projected to increase.
Fires
Climate change is also associated with an increase in fire season intensity. The Western United States experienced three years of massive wildfires from 2020 to 2022, with more than 1.2 million acres burning each year [10]. The ongoing drought has led to an accumulation of dry trees, shrubs, and grasses [10]. A 2016 study found that this increase in dry organic plant material has more than doubled the number of large fires in the Western United States since 1984 [10]. One of the ways that dry matter may ignite is by lightning: projections show that by 2060, the area burned by lightning-ignited wildfires will be 30% greater than in 2011 [10].
Residents in California are in danger of losing their lives and property to fire damage. A single fire can lead to massive destruction. In 2018, the Woolsey Fire burned 96,949 acres and hundreds of homes, and killed three people [11]. Over one million buildings in California are within high-risk fire zones, and this number is projected to increase as temperatures continue to rise [10]. With the amount of dry organic matter increasing and wildfire incidence surging, there will be more cases of property damage and loss of life in California. High temperatures and extreme weather events make it more likely that people will fall victim to these life-threatening disasters.
CONCLUSION
Increases in global temperature have a negative effect on human physical health and mental wellbeing. Climate change is making it more difficult to secure a livelihood, changing the spread of disease, and destroying lives and property. However, projections about rising temperatures give farmers the chance to make informed decisions about which crops to grow, fishers the chance to relocate to areas less impacted by algal blooms, health experts the ability to predict when and where outbreaks of certain diseases might occur, and fire protection services the opportunity to increase their presence in high-risk areas. Projections help people anticipate where and when a climate-change-associated event is likely to occur so that they can respond more quickly and efficiently. The consequences of climate change can be mitigated by using models as a guide for what to expect in California's future.
REFERENCES
- James I. 2018. California agriculture faces serious threats from climate change, study finds. The Desert Sun. Accessed January 31, 2023. Available from www.desertsun.com/story/news/environment/2018/02/27/california-agriculture-faces-serious-threats-climate-change-study-finds/377289002/
- U.S. Department of Agriculture. Climate Change and WINTER CHILL. Accessed December 23, 2023. Available from www.climatehubs.usda.gov/sites/default/files/Chill%20Hours%20Ag%20FS%20_%20120620.pdf
- Zhang S. 2015. Time to Add Pistachios to California’s List of Woes. WIRED. Accessed February 15, 2023. Available from www.wired.com/2015/09/time-add-pistachios-californias-list-problems/
- Semuels A. 2019. ‘They’re Trying to Wipe Us Off the Map.’ Small American Farmers Are Nearing Extinction. TIME. Accessed January 31, 2023. Available from time.com/5736789/small-american-farmers-debt-crisis-extinction/
- Gross L. 2021. As Warming Oceans Bring Tough Times to California Crab Fishers, Scientists Say Diversifying is Key to Survival. Inside Climate News. Accessed January 31, 2023. Available from insideclimatenews.org/news/01022021/california-agriculture-crab-fishermen-climate-change/
- Martens P. 1999. How Will Climate Change Affect Human Health? The question poses a huge challenge to scientists. Yet the consequences of global warming for public health remain largely unexplored. Am Sci. 87(6):534–541.
- Lafferty KD. 2009. The ecology of climate change and infectious diseases. Ecol Soc Amer. 90(4):888-900.
- Hanson N. 2022. Climate change drives another outbreak: In California, it’s a spike in Valley fever cases. Courthouse News Service. Accessed March 8, 2023. Available from www.courthousenews.com/climate-change-drives-another-outbreak-in-california-its-a-spike-in-valley-fever-cases/
- Treseder KK, Berlemont R, Allison SD, & Martiny AC. 2018. Drought increases the frequencies of fungal functional genes related to carbon and nitrogen acquisition. PLoS ONE [Internet]. 13(11):e0206441. doi.org/10.1371/journal.pone.0206441
- National Oceanic and Atmospheric Administration. 2022. Wildfire climate connection. Accessed January 31, 2023. Available from www.noaa.gov/noaa-wildfire/wildfire-climate-connection#:~:text=Research%20shows%20that%20changes%20in,fuels%20during%20the%20fire%20season
- Lucas S. 2019. Los Angeles is the Face of Climate Change. OneZero. Accessed January 31, 2023. Available from onezero.medium.com/los-angeles-is-burning-f9fab1c212cb
Floating Photovoltaics (FPVs): Impacts on Algal Growth in Reservoir Systems
By Benjamin Narwold, Environmental Science and Management major ’23
Author's Note: I wrote this review paper to learn more about the environmental impacts of floating photovoltaics (FPVs) because this topic directly applies to my work as an undergraduate researcher with the Global Ecology and Sustainability Lab at UC Davis. I wanted to focus specifically on the impacts of FPVs on algae because of the biological implications of disturbing ecologically important photosynthesizers in reservoirs. I want readers to develop an understanding of FPVs as a climate change mitigation solution, of how these systems may disturb algae, and of the uncertainties in whether expected and observed changes in algal growth are beneficial or detrimental to the aquatic environment.
ABSTRACT
Floating photovoltaics (FPVs) are typical photovoltaics mounted on plastic pontoon floats and deployed on man-made water bodies. If FPVs were developed to cover 27% of the surface area of US reservoirs, they could provide roughly 10% of the electricity in the US. Freshwater reservoirs host vulnerable ecosystems; therefore, understanding the water quality impacts of FPVs is necessary for sustainable development. This review aimed to fingerprint the impacts of FPVs on reservoir aquatic ecology in terms of algal growth and to identify the uncertainties in FPV-induced algae reduction, presenting our current understanding of the environmental impacts of reservoir-based FPVs. The UC Davis Library database was searched for papers from peer-reviewed journals published from 2018 to 2022 that covered "floating photovoltaics", "algae reduction", and "environmental impacts". A consistent result across studies was that FPVs reduce algal growth by reducing the sunlight entering the host waterbody; this can disrupt phytoplankton dynamics and have cascading effects on the broader ecosystem. Modeling and experimental approaches found that 40% coverage of a reservoir by FPVs is optimal for energy production while maintaining the algae levels necessary to support the local ecosystem. The lack of research on the ideal percent coverage of FPVs, enough to reduce algal growth but not disrupt ecosystem dynamics, emphasizes the need for future work addressing FPV disturbance of local microclimates, algal responses to reduced sunlight, and the corresponding cascading impacts on other organisms dependent on the products of algal photosynthesis.
Keywords: floating photovoltaics, algae reduction, environmental impacts, water and ecology management, energy and water nexus
Caption: Floating photovoltaic (FPV) system in Altamonte Springs, Florida. One of four sites monitored by the Global Ecology and Sustainability Lab for water quality impacts of FPV.
INTRODUCTION
Climate change is a global problem of increasing intensity and poses challenges to food, water, and energy security. Global climate models predict a 2-4°C increase in global temperatures from now until 2100, which will degrade human health and threaten ecosystems [1]. Renewable energy is a critical component of reducing anthropogenic greenhouse gas emissions, and the widespread transition away from fossil fuels is becoming increasingly feasible with new technologies. One of these new renewable energy systems is floating photovoltaics (FPVs), standard photovoltaic (solar panel) modules mounted to a polyethylene pontoon float system, positioned off the water’s surface, and anchored to the bottom or shore of the host waterbody [2]. FPVs represent an intriguing and novel renewable energy solution because they can be deployed on human-constructed water bodies and improve land-use efficiency. Ground-mounted solar projects compete for land against agricultural and urbanization interests, whereas many artificial and semi-natural water bodies, such as wastewater discharge pools, have no conflicting human interests [3]. FPV development thus presents an opportunity to sustainably increase solar energy production without interfering with agricultural and urban development, which will continue to expand as world populations increase. In addition to optimizing land use, FPVs can produce up to 22% more power than conventional solar due to evaporative cooling [4]. The solar panels are located just above the water’s surface, so the local water evaporation contributes to a reduction in solar panel temperature, thus increasing efficiency. Generating electricity using FPVs is intended to augment solar power generation capacity and supply more renewable energy to the grid for households and industry.
Among the most abundantly available spaces for developing this pivotal land-use optimization and climate change mitigation solution are reservoirs, lakes formed by damming a river for water storage and hydropower production. A GIS analysis found that covering 27% of the surface area of reservoirs in the United States with FPVs would generate enough electricity to meet 9.6% (2,116 gigawatts) of the country's 2016 energy demands [4]. Reservoirs and similar bodies of water nevertheless host vulnerable freshwater ecosystems, so understanding the water quality and species impacts of FPVs is the primary hurdle to informing the sustainable development of these systems.
FPVs reduce the amount of sunlight reaching the surface of their host waterbody, which reduces evaporative water loss and results in significant changes to algal growth [5]. Several studies have found that FPVs alter phytoplankton dynamics and can have cascading effects on other organisms in the ecosystem [6–8]. A key source of uncertainty surrounding reservoir FPVs is determining the equilibrium range of algal growth needed to support reservoir food webs. Some reservoir systems experience strong summertime algal blooms. An algal bloom is a rapid increase in or overaccumulation of an algal population that can result in oxygen-depleted waterbodies called "dead zones," where the algae eventually die and decompose [9]. FPV-induced shading can counter harmful algal blooms, providing environmental benefits that augment renewable energy generation. Alternatively, in reservoirs that do not have problematic algal blooms, adding an FPV system may reduce healthy algal populations and cause adverse rippling effects on other species in the ecosystem. Determining what percentage of a reservoir's water surface covered by FPVs is enough to reduce algal growth and bloom potential but not so large as to disrupt ecosystem dynamics will require further research. Specifically, researchers must assess the disturbance of local microclimates caused by FPVs, algal responses to reduced sunlight, and the impact on other aquatic species that depend on the ecosystem functions algae provide. Climate change is expected to bring higher temperatures and shifting precipitation patterns, so it is important to contextualize the water quality impacts of FPVs and their influence on algae against this variability.
Figure 1. Impact of FPVs on algae in reservoir ecosystems. FPV-induced shading can provide additional environmental benefits in reservoirs with algal blooms and may cause adverse effects in healthy reservoirs.
Methods
This review surveys what we know regarding the impacts of FPVs on algal growth in reservoir systems. The UC Davis Library database was searched for papers from peer-reviewed journals using the following keywords: "floating photovoltaics," "algae reduction," and "environmental impacts." I looked at experiments on reservoir-based FPVs from 2018-2022 to analyze plot-scale impacts on algal growth, quantified with chlorophyll-a monitoring data, and assessed global-scale changes in algal growth from a climate change perspective, with consideration of FPV materials and design. Although a study on crystalline solar cells incorporated in this review is from 2016 and falls outside the five-year range of focus, it represents a necessary juxtaposition to the semitransparent polymer cell technology. Overall, I analyzed the methods and results of site-specific, laboratory, and global-scale studies to fingerprint the current state of knowledge on the impacts of FPVs on algae and algal blooms to inform reservoir management.
Algal Growth and FPV Coverage Scenarios
Algae are responsible for producing oxygen in the waterbody; the impact of FPVs on algal growth depends on the percentage of the waterbody covered by the array and is measured by looking at chlorophyll-a (ch-a) differences. Ch-a, a pigment present in all photosynthetically active algae, is often used as a proxy measurement for algal growth dynamics within a waterbody [10]. Ch-a is measured optically, using sensors and specific wavelengths of light, so it is an indirect measurement of algal concentration. FPVs reduce the amount of sunlight reaching the surface of their host waterbody and disrupt phytoplankton dynamics. Haas et al. (2020) and Wang et al. (2022) investigated different FPV coverage scenarios and used ch-a as a proxy for algal growth. Haas et al. used the ELCOM-CAED model to evaluate ten different FPV coverage scenarios, and Wang et al. simulated 40% coverage relative to 0% coverage control ponds using black polyethylene weaving nets as a proxy for an FPV array. Both the model output and the experiment-based approach settled on 40% FPV coverage as an equilibrium development target [7, 11]. The results of these studies show continuity; however, Haas et al. did not consider the difference in absorption wavelength range across microalgal taxa, and Wang et al. did not use actual solar panels in their experimental design. Additionally, Andini et al. (2022) investigated the difference in algae between 0% and 100% coverage at Mahoni Lake in Indonesia using mesocosms, isolated systems that mimic real-world environmental conditions while allowing control of biological composition by taking samples at the same water depths. These researchers found that 100% FPV coverage reduced ch-a by up to 1.25 mg/L, average temperature by up to 2.5℃, dissolved oxygen by up to 1.5 mg/L, and electrical conductivity categorically across the waterbody. However, they only considered directly measured water quality variables and did not assess the long-term trophic consequences of 100% FPV coverage [6]. The study was clearly designed to show the polarity between 0% and 100% coverage in terms of several water quality parameters; however, the realistic intermediate FPV coverages incorporated into both Haas et al. and Wang et al. were absent. Given these compiled results, future research can continue to work toward the broader question of determining what percent FPV coverage maximizes energy production while minimizing environmental disturbance.
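To make the coverage question concrete, consider a deliberately simple light-limited growth model. The sketch below is not the model used by Haas et al. or the design of Wang et al.: it assumes that irradiance under the array scales linearly with the uncovered fraction and that algal growth saturates with light (a Monod-type response), then compares growth at 0%, 40%, and 100% coverage.

# Toy light-limited algal growth model (all constants hypothetical).
# Assumptions: under-array irradiance scales with the uncovered fraction;
# specific growth rate follows Monod-type saturation with light.

MU_MAX = 1.2    # hypothetical maximum specific growth rate (1/day)
I_FULL = 500.0  # hypothetical open-water irradiance (umol photons/m^2/s)
K_I = 120.0     # hypothetical half-saturation light level

def growth_rate(fpv_coverage):
    """Specific algal growth rate (1/day) at a given FPV coverage fraction."""
    irradiance = I_FULL * (1.0 - fpv_coverage)
    return MU_MAX * irradiance / (K_I + irradiance)

for cover in (0.0, 0.40, 1.0):
    print(f"{cover:>4.0%} coverage -> growth rate {growth_rate(cover):.2f}/day")

Under these saturating kinetics, 40% coverage removes 40% of the light but cuts growth by only about a tenth, which illustrates why intermediate coverages can gain substantial generating area without collapsing algal productivity; the real systems reviewed here are, of course, far more complex.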
Algal Blooms and Mitigation Potential
Algal blooms are a product of high-productivity conditions that favor rapid algae growth, and the shading provided by FPV systems could mitigate the intensity and negative impacts of summertime algal blooms. High-productivity conditions include high water temperature, intense sunlight, and abundant nutrients such as nitrogen and phosphorus; the first two variables can be controlled by FPV coverage. In a study of the global change in phytoplankton blooms since the 1980s, Ho et al. (2019) found that most of the 71 large lakes sampled saw an increase in peak summertime bloom intensity over the past three decades, and that the lakes showing improvement in bloom conditions experienced little to no warming. The study considered temperature, precipitation, and fertilizer inputs, but it could not find a significant correspondence between blooms and any one of these variables exclusively [12]. This null result suggests a diversity of causal agents operating on a per-lake basis. Thus, conducting site-specific studies and monitoring these water quality variables will help establish algal bloom causation and the relative intensity of the confounding variables, and therefore whether FPV coverage would be an effective mitigation agent. If the algae in a reservoir are linked to less-controllable variables like carbon dioxide concentration in the water or nutrient loading from agricultural runoff, FPV shading will have a negligible effect on algae [6, 7]. Such considerations are critical to informing the potential environmental co-benefits of an FPV installation.
FPV Solar Cell Design
The properties of the solar cells within the photovoltaic panels themselves are instrumental in determining what wavelengths of light reach the surface of the host waterbody under the panels. Crystalline silicon solar cells absorb radiation wavelengths from 300-1300 nm and have a thick active layer of about 300 µm, responsible for high photon absorption [13]. These properties result in opaque solar panels that do not allow photons to travel through the panel and interact with the waterbody. Conversely, semitransparent polymer solar cells (ST-PSCs) represent an alternative material and technological approach: algal growth can be regulated by engineering the panels to provide specific transmission windows and light intensities. Zhang et al. (2020) found that the growth rate of the algal genus Chlorella was minimized under the opaque treatment; however, the changes in photosynthetic efficiencies did not significantly affect the growth rate of Chlorella during the 24-hour experimentation window. While the researchers were able to show variability in the number of photons penetrating the panels from 300-1000 nm across three treatments with different layering of materials within the ST-PSCs, the study did not yield a significant result [5]. These results have limited scope because the study was conducted in a lab and did not assess full-sized PV panels in the field; however, it highlights that algal species may prefer light wavelengths for photosynthesis that do not coincide with the wavelengths an FPV system best controls. Therefore, it is vital to coordinate solar panel material design so that the panels reflect or absorb the primary wavelengths that support algal photosynthesis. The viability of prioritizing this component of FPV design is uncertain; however, new materials and technologies are being developed and utilized, and this relationship must be considered as we work to maximize FPV coverage in reservoir systems with minimal ecological complications (Figure 2).
Figure 2. Relationship between FPV transparency, light profiles entering the waterbody and interacting with algae, and FPV coverage optimization. Solar cell design influences light transmission, and photosynthetic rates in algae vary with light wavelength and intensity, providing site-specific design opportunities.
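One way to reason about this coordination is to estimate how much photosynthetically active radiation (PAR, roughly 400-700 nm) a given cell design transmits. The sketch below is an illustration with invented transmission windows, not measurements from Zhang et al.: each design is approximated as a flat transmission fraction over a single wavelength band, and the overlap with the PAR band is computed.

# Fraction of PAR (approx. 400-700 nm) passed by a panel, modeling each
# design as flat transmission over one band. All window bounds and
# transmission values below are invented for illustration.

PAR = (400.0, 700.0)

def par_transmitted(window, transmission):
    """Fraction of the PAR band passed by a flat transmission window."""
    lo, hi = max(window[0], PAR[0]), min(window[1], PAR[1])
    overlap = max(0.0, hi - lo)
    return transmission * overlap / (PAR[1] - PAR[0])

designs = {
    "opaque crystalline silicon": ((300.0, 1300.0), 0.0),  # absorbs; passes ~0
    "hypothetical ST-PSC A": ((450.0, 650.0), 0.35),
    "hypothetical ST-PSC B": ((550.0, 750.0), 0.50),
}

for name, (window, t) in designs.items():
    print(f"{name}: ~{par_transmitted(window, t):.0%} of PAR transmitted")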
CONCLUSION
FPVs are a relatively untapped climate change mitigation solution and can potentially reduce algae, benefitting water quality in freshwater ecosystems and reservoirs that suffer from strong summertime algal blooms. Algae are critical primary producers in reservoir ecosystems; therefore, areas for future research include microalgal responses to the reduced sunlight conditions created by FPVs and the ecological roles of algal taxa within the reservoir ecosystem. Further laboratory studies of solar panel designs in this context are also needed. Future research on FPVs and water quality must account for climate change, shifting baselines, and environmental variability. From a reservoir management viewpoint, this includes studying whether reservoirs have lower nutrient loading and whether their algae can be managed with FPV arrays, fingerprinting inter-reservoir variability to determine where FPV arrays should be placed and how to localize impacts, and further modeling the relationship between warming and algal blooms to understand the long-term effectiveness of FPV-based algae management. Climate change will continue to operate in the background, and energy security issues will intensify. Our understanding of the environmental impacts of FPVs is currently too limited to safely approve and construct these systems on most reservoirs; therefore, future studies are needed to incorporate this modern technology into the global renewable energy portfolio.
REFERENCES
- Ara Begum R, Lempert R, Ali E, Benjaminsen TA, Bernauer T, Cramer W,Cui X, Mach K, Nagy G, Stenseth NC, Sukumar R, Wester P. 2022. Point of Departure and Key Concepts. In: Pörtner HO, Roberts DC, Tignor M, Poloczanska ES, Mintenbeck K, Alegría A, Craig M, Langsdorf S, Löschke S, Möller V, Okem A, Rama B (eds.). Climate Change 2022: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge (UK) and New York (NY): Cambridge University Press. 121–196. doi:10.1017/9781009325844.003.
- Energy Sector Management Assistance Program, Solar Energy Research Institute of Singapore. 2019. Where Sun Meets Water: Floating Solar Handbook for Practitioners. Washington, DC (USA): World Bank.
- Cagle AE, Armstrong A, Exley G, Grodsky SM, Macknick J, Sherwin J, Hernandez RR. 2020. The Land Sparing, Water Surface Use Efficiency, and Water Surface Transformation of Floating Photovoltaic Solar Energy Installations. Sustainability [Internet]. 12(19):8154. doi:10.3390/su12198154
- Spencer RS, Macknick J, Aznar A, Warren A, Reese MO. 2019. Floating Photovoltaic Systems: Assessing the Technical Potential of Photovoltaic Systems on Man-Made Water Bodies in the Continental United States. Environ Sci Technol [Internet]. 53(3):1680–1689. doi:10.1021/acs.est.8b04735
- Zhang N, Jiang T, Guo C, Qiao L, Ji Q, Yin L, Yu L, Murto P, Xu X. 2020. High-performance semitransparent polymer solar cells floating on water: Rational analysis of power generation, water evaporation and algal growth. Nano Energy [Internet]. 77:105111. doi:10.1016/j.nanoen.2020.105111
- Andini S, Suwartha N, Setiawan EA, Ma’arif S. 2022. Analysis of Biological, Chemical, and Physical Parameters to Evaluate the Effect of Floating Solar PV in Mahoni Lake, Depok, Indonesia: Mesocosm Experiment Study. J Ecol Eng [Internet]. 23(4):201–207. doi:10.12911/22998993/146385
- Haas J, Khalighi J, de la Fuente A, Gerbersdorf SU, Nowak W, Chen PJ. 2020. Floating photovoltaic plants: Ecological impacts versus hydropower operation flexibility. Energy Convers Manag [Internet]. 206:112414. doi:10.1016/j.enconman.2019.112414
- Pimentel Da Silva GD, Branco DAC. 2018. Is floating photovoltaic better than conventional photovoltaic? Assessing environmental impacts. Impact Assess [Internet]. 36(5):390–400. doi:10.1080/14615517.2018.1477498
- Sellner KG, Doucette GJ, Kirkpatrick GJ. 2003. Harmful algal blooms: causes, impacts and detection. J Ind Microbiol Biotechnol [Internet]. 30(7):383–406. doi:10.1007/s10295-003-0074-9
- Pápista É, Ács É, Böddi B. 2002. Chlorophyll-a determination with ethanol – a critical test. Hydrobiologia [Internet]. 485(1):191–198. doi:10.1023/A:1021329602685
- Wang TW, Chang PH, Huang YS, Lin TS, Yang SD, Yeh SL, Tung CH, Kuo SR, Lai HT, Chen CC. 2022. Effects of floating photovoltaic systems on water quality of aquaculture ponds. Aquac Res [Internet]. 53(4):1304–1315. doi:10.1111/are.15665
- Ho JC, Michalak AM, Pahlevan N. 2019. Widespread global increase in intense lake phytoplankton blooms since the 1980s. Nature [Internet]. 574(7780):667–670. doi:10.1038/s41586-019-1648-7
- Battaglia C, Cuevas A, De Wolf S. 2016. High-efficiency crystalline silicon solar cells: status and perspectives. Energy Environ Sci [Internet]. 9(5):1552–1576. doi:10.1039/C5EE03380B
How does prenatal nicotine exposure increase the chance of a child developing asthma?
By Madhulika Appajodu, Cell Biology ’24
Author’s Note: My name is Madhulika Appajodu and I am a 3rd Year Cell Biology major at UC Davis. I am a pre-medical student and hope to go on to medical school. I chose Cell Biology as a major because I found the focus on cell organization and function to be very interesting. I am a volunteer at Shifa Community Clinic and a member of MEDLIFE, SEND4C, and H4H. I am also a BioLaunch Mentor and a Learning Assistant for the Physics Department. I wrote this piece to answer the question: “How does prenatal nicotine exposure increase the risk of asthma in offspring?” I wrote this for undergraduate students in the field of epigenetics/prenatal exposures and experts/professors in the field but also for the general public who have some knowledge in science. I chose this topic in particular because epigenetics interests me greatly. I find that environmental factors likely play a large part in the life outcomes of people who may be genetically similar but grew up in different environments. I hope that readers will understand how important environmental factors are in the grand scheme of physical, emotional, and mental health for not just the reader but their future families (if they choose to have them) health as well.
ABSTRACT:
Previous studies have examined prenatal nicotine exposure and its effects, which follow offspring over the course of their lives. One of these effects is asthma. Asthma is a chronic respiratory condition characterized by the narrowing of one's airways in response to an allergen or irritant. It is a widespread condition, currently affecting over 25 million people in the US alone. The mechanisms of asthma and its causes are still being investigated. However, researchers agree that prenatal nicotine exposure substantially increases the risk of asthma in offspring.
There is currently no cure for asthma, only methods to lessen the intensity of asthmatic episodes, such as the use of an inhaler. This literature review details the mechanisms through which prenatal nicotine exposure increases the risk of asthma in offspring, according to current research. The three potential causes of this increased risk are placental damage, epigenetic alteration, and nicotine exposure alone. The mechanisms will be evaluated through a synthesis of experimental and survey data from mouse and human studies conducted in the past seven years, and comparisons will be drawn between articles that cite the same mechanism as the cause of the increased risk. Once the mechanism(s) are identified, research can be done to identify a solution so that asthma due to prenatal nicotine exposure can be prevented.
INTRODUCTION
In the United States, approximately 25 million people are currently diagnosed with asthma [1]. Asthma is a respiratory condition characterized by difficulty breathing due to narrowing airways, caused by inflammation and excess mucus production. This inflammatory response is often triggered by viruses or airborne allergens. Researchers are currently investigating the underlying immune mechanisms that cause the intense inflammatory response, which is often more severe when someone has been subjected to risk factors such as prenatal nicotine exposure. Since there is currently no cure for asthma, research into the underlying mechanisms of the inflammatory response is vital so that asthma can be prevented rather than simply managed.
Researchers have studied prenatal nicotine exposure and its effects on offspring for decades, focusing on human subjects who smoked while pregnant. Over the past thirty years, there has been a shift toward using animal trials to investigate the mechanisms associated with the risk factors for asthma.
The primary model in mouse asthma research is the house dust mite (HDM) model. The HDM model involves exposing one group of pregnant mice to tobacco smoke-infused air and another group of pregnant mice to filtered air. The offspring of both groups are exposed to house dust mites, a common allergen, and their inflammatory immune response is examined. There are variations of the model, such as exposing the fathers to nicotine prior to mating or exposing the female mice to nicotine prior to or during pregnancy.
Current literature cites three main factors that contribute to an increased risk of asthma: nicotine smoke exposure alone, placental damage induced by nicotine, and epigenetic alterations induced by nicotine. Nicotine passes from the mother’s blood to the fetus through the umbilical cord during pregnancy. Nicotine can also damage the placenta through vasoconstriction of blood vessels and alter the fetus’ epigenetic markers through DNA methylation.
The purpose of this literature review is to examine precisely how prenatal nicotine exposure increases the risk of asthma, first in experimental data using the HDM model and then in experimental and survey data regarding humans.
Prenatal Nicotine Smoke Exposure
In 2015, Eyring et al. proposed that nicotine use in pregnant women increases the risk of asthma in offspring through epigenetic alterations [2]. Eyring et al. exposed one group of female mice to environmental tobacco smoke (ETS) for five weeks, mated them with male mice, and continued the ETS exposure until the pregnant mice gave birth; a control group of female mice was exposed to filtered air and mated in parallel. The offspring of the ETS-exposed group did display an increased inflammatory response to house dust mites compared to the control group. However, gene expression levels in the two groups were not statistically different. Thus, Eyring et al. concluded that prenatal nicotine exposure can increase the risk of asthma in offspring but were unable to identify the mechanism through which prenatal ETS causes the increased inflammatory response [2]. It is possible that the bisulfite sequencing equipment available at the time of the study was not sensitive enough to detect the differences in methylation that newer studies have observed.
Figure 1. Expression levels of IL-5 (a Th2 cytokine) are the same for the CS (ETS-exposed) and FA (filtered air) mice when exposed to house dust mites (HDM). This indicates that gene expression levels were not affected by ETS.
A three-generation survey study on human subjects found a correlation between maternal smoking and an increased risk of asthma in offspring, as well as a correlation between grandmothers' smoking during pregnancy and an increased risk of asthma in their grandchildren, regardless of the intermediate generation's smoking habits [3]. The researchers also found a correlation between paternal smoking and an increased risk of asthma in offspring [3]. They hypothesized that paternal smoking alters microRNA (miRNA) in sperm. MiRNAs are short RNA molecules that regulate gene expression. During fertilization, this altered miRNA can change the gene expression of the progeny, increasing the risk of asthma in the offspring. The conclusion of this study is that maternal, paternal, and grandmaternal nicotine exposure are all correlated with an increased risk of asthma in offspring. The researchers also proposed epigenetic alteration as the mechanism of increased asthma risk but, due to the nature of the study, were unable to confirm this hypothesis [3].
Placental Damage
A survey by Zacharasiewicz et al. of mothers who smoked and mothers who did not concluded that prenatal exposure to nicotine causes placental damage, decreasing nutrient delivery to the fetus [4]. Prenatal nicotine exposure decreases alveolar surface area, thereby decreasing the tidal volume of fetal lungs after birth [5]. Tidal volume is the amount of air that enters the lungs per breath. A decreased tidal volume results in less oxygen entering the body under standard conditions and a vastly reduced amount of oxygen entering the body during exposure to an allergen. Placental damage also results in accelerated aging of the fetus' lungs, as pulmonary cells perform less glycogenolysis and glycolysis, causing cells to die prematurely [6]. The premature death of lung cells leaves the lungs weaker, unable to exchange a normal amount of oxygen, and therefore more prone to intense allergic reactions.
Similarly, a study by Cahill et al. using the HDM mouse model found that inhaling nicotine causes vasoconstriction, the narrowing of blood vessels, in the mother, resulting in less oxygen and fewer nutrients delivered to the fetus [7]. They also found that placental HSD2 (a crucial enzyme in fetal development) is decreased when pregnant mothers are exposed to nicotine. Cahill et al. also observed that placental damage from nicotine use resulted in decreased birth weights and lung sizes in fetuses [7]. Decreased lung size leads to intense asthmatic episodes because the airways are smaller and narrower than those of an individual not exposed to nicotine prenatally. Ultimately, Zacharasiewicz and Cahill came to the same conclusion: nicotine consumption or exposure in pregnant women increases the risk of asthma in their offspring by negatively affecting the offspring's lungs [4,7].
Epigenetic Alteration
Researchers agree that DNA methylation is one of the mechanisms leading to increased asthma risk [8]. DNA methylation, the primary form of epigenetic alteration that occurs when a fetus is exposed to nicotine, is a chemical reaction in which a methyl (-CH3) group is added to a cytosine base. The methyl group blocks transcription factors from binding to the DNA and recruits repressor proteins, resulting in underexpressed genes; conversely, loss of methylation (hypomethylation) increases a gene's expression, which in this case produces a disproportionate inflammatory response. However, there is disagreement among researchers about which genes are being alternatively methylated. Christensen et al. conducted an HDM mouse study and found that methylation of the genes which produce and regulate Th2 cytokines was decreased in the offspring of mothers exposed to ETS [9]. Cytokines are small proteins that regulate the immune response; Th2 cells produce cytokines that encourage inflammation. Thus, the increased expression of Th2 cytokines intensifies the inflammatory response to the asthma trigger of house dust mites. Christensen et al. found that Th1 cytokine levels remained constant and their methylation was unaffected [9].
Conversely, Singh et al. found that Th1 cytokine levels decreased due to hypermethylation [10]. Singh et al. did also find that Th2 cytokine levels increased due to hypomethylation, which concurs with the findings of Christensen et al. [9-10].
Figure 2. Expression levels of IL-3 (a Th2 cytokine-producing gene) in groups exposed to tobacco smoke-infused air (SS) or filtered air (FA). There is a statistically significant increase in expression in the SS group, indicating a decrease in methylation.
Christensen et al. exposed pregnant female mice to either tobacco smoke-infused air or filtered air and then examined the offspring [9]. Singh et al. exposed both male and female mice to tobacco smoke-infused air or filtered air prior to mating and then examined the offspring [10]. This variation in experimental methods could contribute to the difference seen in the methylation of Th1 cytokine-producing genes. However, both researchers concluded that the nicotine-induced DNA methylation levels changed in genes that produced inflammatory responses to allergens [9-10].
Zakarya et al. found that DNA methylation levels were altered in genes associated with fetal growth and nicotine detoxification [11]. This review examined epigenome-wide association studies (EWAS) of patients suffering from asthma whose mothers smoked or vaped during pregnancy. These studies showed increased methylation in placental, whole blood, and fetal lung genes [12]. These results differed from the research done by Singh et al. and Christensen et al. in both the affected genes and the direction of the methylation change [9-10]. The difference can be attributed to the difference between mice and humans as well as to the variation in experimental design. Christensen and Singh used the HDM model in mice and controlled the levels of nicotine the mice were exposed to [9-10]. Zakarya et al. used data from children of women who reported smoking during pregnancy [11]; the levels of nicotine the subjects were exposed to were not controlled and varied greatly. These differences between the studied species and experimental designs could explain the different conclusions that the researchers drew.
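Across all of these studies, the underlying quantity is the per-site methylation level estimated from bisulfite sequencing reads: the fraction of reads at a CpG site that remain methylated. The sketch below is a generic illustration of that computation with invented read counts; it is not data or code from any cited study.

# Per-site methylation fraction from bisulfite sequencing read counts
# (often called a beta value): methylated reads / total reads at a CpG
# site. All counts below are invented for illustration.

def methylation_fraction(methylated_reads, total_reads):
    """Fraction of reads methylated at one CpG site."""
    return methylated_reads / total_reads if total_reads else float("nan")

# Hypothetical CpG sites in a Th2 cytokine promoter: (methylated, total).
control = [(42, 50), (45, 48), (38, 44)]
exposed = [(20, 52), (18, 47), (22, 45)]

for group, sites in (("control", control), ("ETS-exposed", exposed)):
    mean_beta = sum(methylation_fraction(m, n) for m, n in sites) / len(sites)
    print(f"{group}: mean methylation = {mean_beta:.2f}")

In this toy example, the lower mean methylation in the exposed group would correspond to hypomethylation, and thus increased expression, of Th2 genes, the pattern Christensen et al. and Singh et al. reported; real analyses involve genome-wide testing and multiple-comparison correction.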
CONCLUSION
There is no simple answer as to the mechanism by which nicotine use during pregnancy increases the risk of asthma in offspring. However, both epigenetic alterations and placental damage due to nicotine exposure play a role in increased asthma risk.
Research citing nicotine-induced epigenetic alteration as the main cause of the increased risk of asthma identifies various genes being altered by DNA methylation. The HDM studies cited in this review conclude that genes producing cytokines had a decrease in methylation, while a study using human subjects concluded that genes involving fetal growth and nicotine detoxification had an increase in methylation. Further research should determine which altered genes are increasing the risk of asthma so that methylation can be induced or repressed in those genes as a preventive measure for asthma. Further research should also focus on which aspect of nicotine-induced placental damage is the biggest factor in the increased risk of asthma so that a solution can be found to address that aspect.
Future research studies should continue to investigate the two presented mechanisms and identify the factors that are increasing the risk of asthma so that nicotine-induced asthma can be prevented in future generations.
Your genes and you: Examining the effect of direct-to-consumer genetic testing visualizations on conceptions of identity
By Adyasha Padhi, Biochemistry & Molecular Biology and Sociocultural Anthropology ’25
Author’s Note: I wrote this paper for my ANT 109: Visualization in Science Course and we chose a specific visualization and entity connected to it to focus on. 23&Me has always been a company that has interested me and in looking deeper into their business practices, I think that it’s really important that we consider how our identities and our perception of our identity has changed, especially in the 21st century.
Introduction
In recent years, direct-to-consumer (DTC) genetic testing has become widespread, and with it, consumers have had more access to their genetic code than ever before in human history. More than 26 million people, roughly 8% of the US population, have taken at-home DNA tests, and the multi-billion-dollar DTC market continues to expand rapidly. 23&Me, a personal genomics and biotech company based in California, was the first company to offer autosomal genetic testing for ancestry, and it remains a giant in the field, near ubiquitous in the DTC market and in the minds of many consumers.
23&Me, as stated on its website, aims to provide its customers "DNA testing with the most comprehensive DNA breakdown," allowing them to "know [their] personal story, in a whole new way." For consumers, who are typically not geneticists themselves, this analysis and breakdown of their DNA is what they are primarily looking for: they expect to receive information on what their genes mean, from ancestry to health. The interpretation and visualization of DNA test results are the main product and selling point of nearly all of these companies, more specifically, the idea that they can provide the consumer with a way to know themselves better and understand their ancestry and family history on a deeper level.
Because of this, the way that companies create and present this genetic information is paramount to understanding how DTC testing shapes consumers' and wider society's conceptions of ancestry and identity. This review looks at a specific case study, 23&Me's "Ancestry Composition" visualization: how it is created, how it is interacted with, and what it communicates about ancestry and identity. It then examines the broader impact of quantitative tools on personal and community identity, since our genes affect us on a biological level while our understanding of genes and genetics influences the way we move through the world.
23&Me’s “Ancestry Composition” Visualization:
Figure 1: A sample “Ancestry Composition” report from 23&Me’s website
23&Me's "Ancestry Composition" visualization is typical of genetic ancestry results in the field and is composed of three main parts: a pie chart representing the consumer's percent ancestry; a list breaking down those percentages by world region and then by country or ethnic group; and a map that illustrates different regions of the world in different colors depending on the ancestry found. With the rise of DTC testing, this iconography now dominates most visual communication about ancestry.
First, it is important to understand what DNA is. Deoxyribonucleic acid, or DNA for short, is a complex molecule that contains the genetic information for the development and functioning of an organism, acting as the hereditary material in nearly all organisms through its sequences of nucleotides. DNA in a sense acts as the blueprint that an organism's cells use to create more cells, growing from a single cell to a fetus and eventually a full human being. As hereditary material, genes are passed from parents to their biological offspring; the complete set of genes or genetic material present in a cell or organism is known as the genome, with genes organized into chromosomes. DNA that codes for functional molecules called proteins is the most commonly known, but this so-called coding DNA makes up only a tiny percentage of the total genome, about 1-5%, with the rest composed of non-coding regions. In addition, genetic material is constantly changing, through not only mutations but also epigenetic changes. These modify chemical marks on the DNA, collectively called the epigenome, and change how genes are expressed, and consequently a person's phenotype, without altering the genetic sequence itself. In some cases, epigenetic changes can be inherited, such as through germ-line transmission of altered epigenomes between generations in the absence of continued environmental exposure (Nilsson 2015). As such, analyzing and drawing conclusions from DNA is a complex process and is not as simple as it may seem.
23&Me goes through a multi-step process to turn the DNA sample that the consumer provides into a visualization that is accessible to them, translating DNA into ancestry information they can understand. 23&Me analyzes your DNA by looking for specific genetic variants across your entire genome, including autosomal DNA, the sex chromosomes, and mitochondrial DNA (mtDNA). The locations in the genome that vary from person to person are called single nucleotide polymorphisms (SNPs for short), and the different versions of a SNP are called alleles. Everyone carries two alleles at most SNPs, one from each parent, and while each single-nucleotide polymorphism contains only a small amount of information, by combining information across many SNPs, the company's algorithm can develop a picture of your genetic ancestry. It is not the SNPs themselves but their variation over time in populations that can be used to map human migration, isolation, and population development (Henn 2012). As such, ethnicities cannot be determined simply from single genes.
There are six main steps that 23&Me goes through when determining ancestry composition and creating this visualization: preparing for genotyping (amplifying the DNA from the provided sample), training the algorithms on reference data sets, phasing (determining which genetic information was inherited together on the same chromosome), estimating ancestry for each window of the genome, smoothing the window assignments (adjusting them so that the result is more cohesive and understandable), and calibrating and returning the results to the individual in the form of the "Ancestry Composition" visualization (Durand 2021).
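A highly simplified version of the window-estimation and smoothing steps can be sketched in code. The example below is illustrative only and is not 23&Me's algorithm: it assumes per-window likelihood scores over a few hypothetical reference populations are already available, picks the most likely population for each window, applies a simple majority-vote smoother over neighboring windows, and aggregates the calls into the percentages a pie chart would display.

# Toy local-ancestry pipeline: per-window assignment, neighborhood
# smoothing, then aggregation into composition percentages. All inputs
# are invented; real pipelines phase haplotypes and calibrate confidence.

from collections import Counter

windows = [  # hypothetical per-window likelihoods over three populations
    {"PopA": 0.7, "PopB": 0.2, "PopC": 0.1},
    {"PopA": 0.6, "PopB": 0.3, "PopC": 0.1},
    {"PopB": 0.6, "PopA": 0.3, "PopC": 0.1},  # isolated flip, smoothed below
    {"PopA": 0.8, "PopB": 0.1, "PopC": 0.1},
    {"PopA": 0.7, "PopB": 0.2, "PopC": 0.1},
    {"PopB": 0.9, "PopA": 0.05, "PopC": 0.05},
    {"PopB": 0.8, "PopA": 0.1, "PopC": 0.1},
]

calls = [max(w, key=w.get) for w in windows]  # step 4: per-window estimate

def smooth(calls, radius=1):
    """Majority vote over each window's neighborhood (step 5, simplified)."""
    return [Counter(calls[max(0, i - radius): i + radius + 1]).most_common(1)[0][0]
            for i in range(len(calls))]

smoothed = smooth(calls)
composition = {p: n / len(smoothed) for p, n in Counter(smoothed).items()}
print(composition)  # -> PopA ~71%, PopB ~29% of assigned windows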
Social context of direct-to-consumer genetic ancestry tests
DTC genetic testing addresses a series of existing social desires with new technological means, combining the modern enthusiasm for science with primal interests in asserting the "naturalness" of one's identity and a postmodern emphasis on radical individualism (Lang and Winkler 2021). It is just the latest of many ways that humans have tried to understand their relationships with others, and looking into this history lends insight into the practice in its current form. Throughout history, ancestry has been used to solidify relations, and thus power, in many societies, such as hierarchical monarchies or caste systems (Lang and Winkler 2021). Biological relations grant membership in communities and in structures of power, so being able to prove ancestry and keep a record of it has been important. Fundamentally, as humans, we have always been trying to make sense of ourselves and the world around us.
However, alongside this desire to organize the world, structures of power and groups who want power arise; the easiest way to gain power is by dividing people up and creating hierarchies. This is where ideologies such as racism and eugenics serve as justification for dehumanization and violence, creating system-driven harm that cannot be easily dismantled because it is no longer individual-to-individual but part of a wider pattern of systemic violence. This includes historical slavery, colonialism, and recent racially motivated violence.
Impact of DTC on the social construction of Ancestry & Identity
To understand the impact of DTC testing on consumer identity, we can start by examining the sociotechnical architecture of 23&Me. The products and visualizations created by DTC companies are often structured in such a way that the user is not given sufficient context to understand the results they receive. As seen in the sample results, limited information is provided on the most prominent consumer-facing pages, which primarily show a percentage of the consumer's DNA associated with a certain heritage. This can be attributed in part to 23&Me's consumer-facing information architecture and UX design. Just as a building's architecture is an organization of materials and components that together define the building, a technology's sociotechnical architecture describes how its technical aspects (its physical system and the task it aims to perform) interact with its social aspects (its structure and organization, and how it impacts people cognitively and socially).
While 23&Me does disclose the difficulty of quantifying ancestry, its marketing and product presentation do not do enough to recognize the broader socio-cultural and historical context of which they are a part. Furthermore, compared to similar companies, 23&Me provides as much raw information to its consumers as possible and builds on the idea that a user possesses the expertise and autonomy to determine the reliability and utility of the test results presented to them. This absolves the company of responsibility for misinterpretation and downplays the difficulty of understanding SNP test results (Parthasarathy 2010). As a whole, by presenting the consumer's results in a very quantitative manner and pushing these ideas in its marketing while not providing much accessible context near those results, 23&Me's products can push onto customers a genetic essentialist bias: cognitive biases arising from exposure to beliefs that genes are relevant to behavior, condition, and social grouping (Dar-Nimrod & Heine 2011). This leads to the erroneous perception that conditions given genetic attributions are more immutable, determined, homogenous, and natural.
Another core aspect of this process is the pool of reference genotypes used at multiple points throughout the production of the visualization. The groups most represented in these reference genotypes are people of European ancestry (Wapner 2020). This is for a range of reasons, one being structures of power that have given those populations access to resources and thereby preserved their ancestry records and methods of ancestry remembrance. The data and information that these tests provide are not trivial, especially when it comes to 23&Me's other half, health-related genetic testing. Marginalized groups therefore need greater access to and representation in these tools so that the tools work accurately for them, though it is also important to recognize the flaws in this system and not blindly encourage individuals to give their data to these companies without understanding the full picture. It also bears emphasizing that there are no genes specific to particular ethnic groups.
More broadly, research investigating the impact of genetic ancestry tests on racial essentialism found that while there was no significant average effect of genetic testing on views of racial essentialism, there were significant differences between individuals with high genetic knowledge and individuals with the least genetic knowledge. Roth found that "essentialist beliefs significantly declined after testing among individuals with high genetic knowledge, but increased among those with the least genetic knowledge," and also found that this trend was not affected by the specific ancestry results received, demonstrating that the difference was driven by differing understandings of genetics (Roth 2020). Recognizing that those with the least genetic knowledge are the most likely to develop essentialist beliefs demonstrates how important it is that education about the process behind genetic testing, and how the results are generated, be easily accessible and more prominent in DTC companies' products and marketing.
Conclusion
As direct-to-consumer genetic testing becomes more and more prevalent, it is changing the way we communicate about and conceptualize ancestry, promoting the construction of essentialist identities at every stage of DTC genetic ancestry testing, from the marketing to the final visualization. The impacts of this push disproportionately affect individuals from marginalized communities, and increased education about genetics and how these systems work is essential to combating essentialism, both within the companies themselves and in wider society.
Works Referenced
News Articles:
- Bahrampour, Tara. “They considered themselves white, but DNA tests told a more complex story.” The Washington Post, 6 February 2018, https://www.washingtonpost.com/local/social-issues/they-considered-themselves-white-but-dna-tests-told-a-more-complex-story/2018/02/06/16215d1a-e181-11e7-8679-a9728984779c_story.html.
- Brown, Kristen V. “23andMe to Use DNA Tests to Make Cancer Drugs.” Bloomberg.com, 4 November 2021, https://www.bloomberg.com/news/features/2021-11-04/23andme-to-use-dna-tests-to-make-cancer-drugs
- Copeland, Libby. “Opinion | DNA and Race: What Ancestry and 23andMe Reveal.” The New York Times, 16 February 2021, https://www.nytimes.com/2021/02/16/opinion/23andme-ancestry-race.html.
- Molla, Rani. “What 23andMe and other genetic testing tools can do with your data.” Vox, 13 December 2019, https://www.vox.com/recode/2019/12/13/20978024/genetic-testing-dna-consequences-23andme-ancestry.
- Pomerantz, Dorothy. “23andMe had devastating news about my health. I wish a person had delivered it.” STAT News, 8 August 2019, https://www.statnews.com/2019/08/08/23andme-genetic-test-revealed-high-cancer-risk/.
- Servick, Kelly. “Frustrated U.S. FDA Issues Warning to 23andMe.” Science Insider, 25 November 2013, https://www.science.org/content/article/frustrated-us-fda-issues-warning-23andme.
Scientific Articles:
- Bryc, Katarzyna, et al. “The Genetic Ancestry of African Americans, Latinos, and European Americans across the United States.” The American Journal of Human Genetics, vol. 96, no. 1, 2015, pp. 37-53, https://www.cell.com/ajhg/fulltext/S0002-9297(14)00476-5.
- Durand, Eric Y., et al. “Ancestry Composition: A Novel, Efficient Pipeline for Ancestry Deconvolution.” bioRxiv, 2014, https://www.biorxiv.org/content/biorxiv/early/2014/10/18/010512.full.pdf.
- Durand, Eric Y., et al. “Reducing Pervasive False-Positive Identical-by-Descent Segments Detected by Large-Scale Pedigree Analysis.” Molecular Biology & Evolution, vol. 31, no. 8, 2014, pp. 2212-2222, https://academic.oup.com/mbe/article/31/8/2212/2925728.
- Durand, Eric Y., et al. “A scalable pipeline for local ancestry inference using tens of thousands of reference haplotypes.” bioRxiv, 2021, https://www.biorxiv.org/content/10.1101/2021.01.19.427308v1.
- Henn, Brenna M., et al. “Cryptic Distant Relatives Are Common in Both Isolated and Cosmopolitan Genetic Samples.” PLoS One, 2012, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0034267.
- Henn, Brenna M., et al. “Hunter-gatherer genomic diversity suggests a southern African origin for modern humans.” PNAS, vol. 108, no. 13, 2011, pp. 5154-5162, https://www.pnas.org/doi/full/10.1073/pnas.1017511108.
- Kim, Soyeon, et al. “Shared genetic architectures of subjective well-being in East Asian and European ancestry populations.” Nature Human Behaviour, 2022, https://pubmed.ncbi.nlm.nih.gov/35589828/#affiliation-1.
Documentaries:
- DNA Testing: The Promise & the Peril. Performance by Scott Wapner, 2020, https://www.peacocktv.com/watch/asset/tv/dna-testing-the-promise-and-the-peril/55fc9111-fb6e-399f-a921-5c036dfe54f3?orig_ref=https://www.google.com/.
- “Identity.” Tribeca Film Festival, https://tribecafilm.com/studios/identity-short-film-series.
- Gray, Edward. “Secrets in Our DNA.” NOVA, PBS, 13 January 2021, https://www.pbs.org/wgbh/nova/video/secrets-in-our-dna/.
STS Articles:
- Abel, Sarah. “Reading DNA ancestry portraits against the grain.” Slaveries and Post-Slaveries, 2020, https://journals.openedition.org/slaveries/2343.
- Boas, Franz. “The Race Problem in Modern Society.” 1909, https://www.jstor.org/stable/1634659#metadata_info_tab_contents.
- Dar-Nimrod, Ilan, and Steven J. Heine. “Genetic Essentialism: On the Deceptive Determinism of DNA.” Psychol Bull., 2011. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3394457/.
- Duello, Theresa M. “Race and genetics versus ‘race’ in genetics.” NCBI, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8604262/.
- Gelman, Susan A. “Essentialism in everyday thought.” Psychological Science Agenda, 2005. American Psychological Association, https://www.apa.org/science/about/psa/2005/05/gelman.
- Heine, Steven J., et al. “Making Sense of Genetics: The Problem of Essentialism.” Genetic Essentialism and Its Vicissitudes, 2019, https://onlinelibrary.wiley.com/doi/full/10.1002/hast.1013.
- Lang, Alexander, and Florian Winkler. “Co-constructing ancestry through direct-to-consumer genetic testing.” https://irihs.ihs.ac.at/id/eprint/5817/1/Lang-Winkler-2021-co-constructing-ancestry-through-direct-to-consumer-genetic-testing.pdf.
- Montagu, M. F. Ashley. “The Concept of Race in the Human Species in the Light of Genetics.” Journal of Heredity, https://academic.oup.com/jhered/article-abstract/32/8/243/817951.
- Oh, Jeongmin, and Uichin Lee. “Exploring UX issues in Quantified Self technologies.” IEEE, https://ieeexplore.ieee.org/document/7061028.
- Parthasarathy, Shobita. “Assessing the social impact of direct-to-consumer genetic testing: Understanding socio-technical architectures.” Genetics in Medicine, vol. 12, 2010, pp. 544–547, https://www.nature.com/articles/gim201090.
- Prainsack, Barbara. “Understanding Participation: The ‘Citizen Science’ of Genetics.” Taylor & Francis eBooks, 2014, https://www.taylorfrancis.com/chapters/edit/10.4324/9781315584300-17/understanding-participation-citizen-science-genetics-barbara-prainsack.
- Roth, Wendy D., et al. “Do genetic ancestry tests increase racial essentialism? Findings from a randomized controlled trial.” Edited by Melissa H. Withers. PLoS One, 2020, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6988910/.
- Templeton, Alan R. “Biological Races in Humans.” NCBI, 16 May 2013, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3737365/.
- Swan, Melanie. “The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.” Big Data, vol. 1, no. 2, 2013, https://www.liebertpub.com/doi/10.1089/big.2012.0002.
Miscellaneous:
- “The 23andMe Ancestry Algorithm Gets an Upgrade.” 23andMe Blog, https://blog.23andme.com/articles/algorithm-gets-an-upgrade.
- “Ancestry Composition.” 23andMe, https://www.23andme.com/ancestry-composition-guide/.
- “Understanding The Difference Between Your Ancestry Percentages And Your Genetic Groups.” 23andMe Customer Care, https://customercare.23andme.com/hc/en-us/articles/5328923468183-Understanding-The-Difference-Between-Your-Ancestry-Percentages-And-Your-Genetic-Groups.
Sex on a spectrum: biological perspectives of intersexuality and transsexuality
By Vishwanath Prathikanti, Anthropology ’23
Author’s note: This past quarter I took ANT158, Evolution of Sex: A Biological Perspective. I had previously, and falsely, believed that most of our understanding of sex and sexuality came from a psychological perspective, resulting from differences in hormonal cascades that occur before birth. It was enlightening to learn about evolutionary theories behind sexuality, the relatively high frequency of intersexed individuals, and how different cultures are shaped because of it. For this paper, I wanted to focus on two groups I was previously unaware had so many biological bases: intersexed individuals and trans individuals. I hope to help correct misconceptions such as the fallacy that, according to biology, there are only two genders.
Many understand the difference between the sexes as a difference in gonads, or reproductive organs: males have testes and females have ovaries, with the growth of each determined by hormonal cascades. However, many do not understand that sex itself exists on a spectrum; if a gene is not expressed or a hormone is not released, there may be a mismatch between an individual’s genetic code and the expression of that code. Such individuals fit the definition of intersex, though exactly what counts as intersex has been a subject of controversy in the scientific community.
Perhaps one of the most important contributors to the acceptance of intersex individuals as more than fringe cases was Dr. Anne Fausto-Sterling, who published a number of books and papers on the subject. In a literature review summarizing research from the 1950s to 2000, Fausto-Sterling and colleagues first presented the notion that the percentage of intersex individuals in the population may be as high as 2% [1]. In this paper, they defined intersex as any individual who deviates, by any of a wide variety of biomarkers, from the idea that there are only two sexes. These deviations can present at the chromosomal, gonadal, or hormonal level. In other words, the key difference between transsexual and intersexed individuals is that intersexed individuals always have some kind of observable biological marker, of which there is a wide array. Transsexual individuals simply have to identify as something besides the gender they were assigned at birth; some may have biological markers associated with the opposite sex and others may not. A famous example of such a marker in transsexual individuals, the BSTc region of the brain, is discussed further below.
In her 2012 book, Sex/Gender: Biology in a Social World, Fausto-Sterling expands on brain-sex and the potential mismatch between an individual’s physical characteristics and their gender identity. A misunderstanding of what brain-sex is may contribute to the perpetuation of the idea that only gender exists on a spectrum. Brain-sex refers to the complicated ways in which hormones, gene expression, and genetic imprinting by the father and/or mother affect a child and the way their brain works. It is not limited to how a person perceives their sex, and this myth may feed the idea that sex and gender are completely separate, with one (sex) referring to biology and the other (gender) to the brain’s perception of identity. In reality, both are linked to biology [2].
Despite these efforts, the notion that sex exists on a spectrum is still challenged. In 2002, for example, Leonard Sax posited that the rate of intersexed individuals was much lower, around 0.018% [3]. However, Sax arrived at this number through a strikingly strict definition of intersex: under it, an individual with an XXY karyotype who had some cells with an XX configuration and others with XY would not be intersexed, as their cells technically match their chromosomes. To be intersexed according to Sax, someone must have a mismatch between phenotypic sex and genotypic sex. A person would be intersex under Sax’s definition, for example, if they had Complete Androgen Insensitivity Syndrome, in which a person has XY chromosomes but never develops male genitalia due to a defect in androgen receptors [3].
Such a definition, however, is far less valuable than what Fausto-Sterling offers. Her definition of intersex is simply being somewhere in between a man and a woman, and it seeks to dismantle the myth that there are only two sexes. While detractors like Sax claim to seek a more clinically rigorous definition, such a definition risks furthering the myth that intersex individuals are merely outliers in society and that sex exists as a binary.
Similarly, once we understand that sex can exist on a spectrum, it becomes understandable why transgender individuals, people who do not identify with the gender imposed on them on the basis of their genitalia, exist in society. In addition to Fausto-Sterling’s research indicating a disconnect between external genitalia and internal genes and hormones, there is other clear evidence of a biological basis for being transgender. Specifically, a study conducted by Zhou and colleagues examined the volume of the central subdivision of the bed nucleus of the stria terminalis (BSTc), a brain region that acts as a relay site during the stress response. The BSTc is essential for sexual behavior for multiple reasons, but perhaps most importantly, it is a major center of aromatization, the process that converts testosterone to estrogen, the two most common male and female sex hormones [4]. It also forms unique connections with the amygdala and hypothalamus, making it highly influential in growth and development. Notably, the BSTc is much larger in males than in females, a difference directly related to the testosterone and estrogen it helps create and regulate. Zhou and colleagues found that the BSTc regions of transgender women and non-transgender women, where neither group was on any kind of hormonal therapy that would affect BSTc size, were similar in size. This female-typical brain structure in genetically male individuals supports the notion that gender identity emerges from the developing brain [4].
While the research base on transgender and intersexed individuals is strong, cultural pushback is rooted in misinformation or in a sense of being threatened by transgender individuals. One prominent example can be seen in sports. Over the years, trans people’s participation in sports has attracted absurd levels of regulation, notably sweeping up people who do not identify as trans at all. Most famous are the numerous cases brought against Caster Semenya, the multiple-gold-medal-winning Olympic track athlete, on the basis of testosterone, a shaky metric. The International Association of Athletics Federations (IAAF) currently states that to compete in the Olympics as a woman, an athlete should have testosterone levels below 5 nmol/L; otherwise, she must compete as a male or take testosterone blockers to compete as a female. However, in a review of nearly 700 elite athletes, Healy and colleagues found that 16.5% of men had testosterone levels below the 5 nmol/L limit and 13.7% of women had levels above it [5]. The IAAF conducted its own study that upheld its regulations, but importantly, it opted to exclude outliers it deemed to have “differences of sexual development,” a choice it has been criticized for but has not rectified as of the publication of this paper [6]. These discriminatory practices further fuel ignorance about intersexed individuals and fail to engage properly with the biology of sex.
The reality is that human beings are more complicated than we’d like to admit. “Bodies are not bounded,” Fausto-Sterling emphasizes in the conclusion to her book. “We will learn a lot about the science of sex and gender in the years to come. But to the extent that our social settings and thus experiences change, at least some of the subtleties of sex and gender will remain a moving target” [2].
REFERENCES
- Blackless, M., Charuvastra, A., Derryck, A., Fausto-Sterling, A., Lauzanne, K., & Lee, E. (2000). How sexually dimorphic are we? Review and synthesis. American Journal of Human Biology, 12(2), 151–166. https://doi.org/10.1002/(SICI)1520-6300(200003/04)12:2<151::AID-AJHB1>3.0.CO;2-F
- Fausto-Sterling, A. (2012). Sex/Gender: Biology in a Social World. Routledge. https://doi.org/10.4324/9780203127971
- Sax, L. (2002). How common is intersex? A response to Anne Fausto-Sterling. Journal of Sex Research, 39(3), 174–178. https://doi.org/10.1080/00224490209552139
- Zhou, J.-N., Hofman, M. A., Gooren, L. J. G., & Swaab, D. F. (1995). A sex difference in the human brain and its relation to transsexuality. Nature, 378(6552), 68–70. https://doi.org/10.1038/378068a0
- Healy, M. L., Gibney, J., Pentecost, C., Wheeler, M. J., & Sonksen, P. H. (2014). Endocrine profiles in 693 elite athletes in the postcompetition setting. Clinical Endocrinology, 81(2), 294–305. https://doi.org/10.1111/cen.12445
- Pielke Sr, R., Tucker, R., & Boye, E. (2019). Scientific integrity and the IAAF testosterone regulations. The International Sports Law Journal, 19. https://doi.org/10.1007/s40318-019-00143-w