
Category Archives: News

Want to Get Involved In Research?

The BioInnovation Group is an undergraduate-run research organization aimed at increasing undergraduate access to research opportunities. We have many programs ranging from research project teams to skills training (BIG-RT) and Journal Club.

If you are an undergraduate interested in gaining research experience and skills training, check out our website (https://bigucd.com/) to see what programs and opportunities we have to offer. To stay up to date on our events and offerings, you can sign up for our newsletter. We look forward to having you join us!

Newest Posts

An Evaluation of eDNA Sampling for Aquatic Species

By Isoline Donohue, Biological Sciences, ’23

Author’s Note: I wrote this literature review for UWP 102B during the spring quarter of 2022, and learned about the Aggie Transcript from that course. I chose to write about this topic because I am very interested in conservation biology and work as an undergraduate researcher in this field. I had heard of environmental DNA before at my lab, but wanted to learn more by doing my own research. From this piece, I hope readers learn about the new and exciting ways species monitoring is being done to preserve ecosystems.

 

Introduction

Marine species are experiencing steeper population declines than many terrestrial species due to anthropogenic causes [1], such as increased water exports or runoff that alter habitats and behavioral patterns. Aquatic systems therefore require a greater focus on species preservation, but keeping track of different species can be difficult. A first step in conservation involves genetic monitoring to track population decline. Genetic monitoring uses DNA to study variation within a species, as well as to distinguish between the species present in an environment. Monitoring a species may include the use of environmental DNA (eDNA) in lieu of DNA collected from the organism itself. eDNA is a sample collected from a habitat, such as water from a stream, that contains the DNA of the species present. Shed skin, mucus, or hair contains DNA that can be eluted and amplified to detect species [2]. Samples are then used for metabarcoding, or non-species-specific detection, as well as for targeting a certain species of interest through assay development of different genetic markers [2]. Overall, this method of sample collection is non-invasive and remains applicable for small populations, where capture methods tend to lose reliability [1]. Traditional capture methods are not always successful in areas a species currently inhabits or recently inhabited, which is where eDNA can be used to discover new territories and monitor current ones. 

Studies have used environmental samples to monitor endangered species, as well as non-native species that endanger other populations [1, 3]. Because eDNA is still a relatively new form of sample collection, detections often need validation through replicates or prior knowledge of inhabitants. Schmelzle et al. (2016) highlight how current eDNA research centers on standardizing DNA capture methods to ensure repeatability [4]. Studies are therefore comparing eDNA to traditional capture methods to determine whether positive detections can be made as reliably as with a tissue sample taken directly from an organism. 

Currently, there is concern over whether eDNA can be reliably used where seasonality impacts a species' presence in an area [5], or where DNA may degrade before proper evaluation [6]. To understand how eDNA can be utilized and where aquatic species detection reaches its limits, various studies have been compiled. In this review, we evaluate the current methodology, validity, effectiveness, and concerns of eDNA through studies centered on endangered or invasive aquatic species. 

Methodology 

While collection, extraction, and analysis methods and materials vary, the general process of going from an environmental sample to an identifiable DNA sequence of a species is consistent. First, the water collected from an aquatic system must be filtered; there are several ways to do this that involve different materials and containers. Ratcliffe et al. (2020), for example, took water samples from the Irish and Celtic seas to detect key taxa in the area. The researchers strained the water using a syringe and filters, then stored the water in the filter holder containers at -20°C until DNA extraction [7]. Alternatively, Boothroyd et al. (2016) filtered their water samples with a funnel and vacuum immediately before DNA extraction to prevent degradation. Samples are generally stored with ethanol at -20°C before DNA extraction and at -80°C afterwards [1, 3, 4, 5, 6, 7, 8]. Franklin et al. (2018) and Dubreuil et al. (2021) used the Qiagen DNeasy Blood and Tissue Kit in a two-day extraction process: lysis to release the DNA, followed by washing and elution to isolate it. Afterwards, qPCRs in each study were run against an assay containing the genetic marker identifying a specific species of interest or, in some cases, a range of taxa. Starting from an environmental sample, species and their relative abundances can thus be identified via DNA sequencing.

Researchers take different approaches in validating positive detections, such as through the use of controls and assessment of assays. For example, Mauvisseau et al. (2019) detected pearl mussels in Lake Windermere. The researchers validated their assays targeting the COI and 16S genes, regions optimal for species identification, with statistical analysis on their level of detection and quantification before using them against the samples collected in the study. The eDNA detections were also tested against tissue samples and positive controls to ensure accuracy. This was a double-blind experiment during the water sample collection and filtering process, where sample site information was not revealed to the researchers until after analysis was done [2]. Dubreuil et al. (2021) cited Mauvisseau et al. (2019) as the researchers followed the same statistical guidelines, and their assays proved to be specific to only their species of interest via the positive detections observed. Lastly, Franklin et al. (2018) also took steps to evaluate an appropriate assay for their species. They found that the COI genetic marker also distinguished their species of interest, smallmouth bass, from other non-target species. They examined COI sequences of bass from different regions to obtain sequence specific primers. The developed assay was tested against database sequences, tissue samples, and finally eDNA itself [9]. These preliminary tests aid in ensuring accurate detections are being made when the assay is used against eDNA samples.

Additionally, repeatability in sample analysis can be used to rule out contamination and strengthen confidence in positive detections. Schmelzle et al. (2016) demonstrated this by testing each water sample with 6 replicate qPCRs alongside positive controls to detect tidewater gobies on the California coast. qPCRs are done to identify and quantify the DNA in the samples collected, while positive controls confirm the accuracy of target detections. Overall, research tends to center on validating eDNA detections, as variability can be high even within the same sampling region.
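The value of replicate qPCRs, as in the six-replicate design above, can be illustrated with a simple probability calculation: if a single reaction detects the target with probability p, the chance that at least one of n independent replicates detects it is 1 − (1 − p)^n. This is a minimal sketch with a hypothetical per-replicate detection probability, not a figure from any of the studies discussed:

```python
def detection_probability(p: float, n: int) -> float:
    """Probability of at least one positive among n independent replicate qPCRs."""
    return 1 - (1 - p) ** n

# Hypothetical per-replicate detection probability, chosen for illustration only.
p = 0.4
for n in (1, 3, 6):
    print(f"{n} replicate(s): P(at least one detection) = {detection_probability(p, n):.3f}")
```

Even with a modest per-replicate probability, six replicates push the overall chance of detection above 95%, which is one reason replication strengthens confidence in both positive detections and absences.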

Genetic information from numerous species may be present in one water sample. Researchers may choose to target a specific species or to identify all of the species in the sample using DNA sequencing methods. (Image: National Park Service)

 

Comparison to traditional methods 

A non-invasive method 

A benefit of eDNA sample collection is that it is noninvasive to an ecosystem. This is most beneficial for detecting low-density species, for which capture methods tend to fail. Boothroyd et al. (2016) evaluated the effectiveness of eDNA in relation to traditional capture methods, specifically regarding spotted gar fish that were collected via netting. The downsides of netting include sample size limitations and disruption of habitats. Boothroyd et al. (2016) placed fyke nets (cylindrical fish traps) at the eDNA sample sites for 24 hours during the spotted gar spawning season to gather a representative sample. Fin clips from 12 reference samples of captured spotted gar were used for DNA extraction and downstream analysis to compare to the eDNA samples [1]. eDNA detections, made from 1-liter water samples, were consistent with areas spotted gar were previously known to inhabit, and were even made at sites where capture methods failed to pick up the species of interest [1]. The researchers in this study also captured dozens of other fish species at the different bodies of water sampled while attempting to collect spotted gar. This variability shows that netting can disrupt more than just the species of interest, as well as limit the sample size. 

Similarly, Schmelzle et al. (2016) used occupancy modeling to compare traditional capture methods against eDNA detection. This study noted the limitations of traditional capture methods: low capture yield versus high cost and time requirements. A seine haul (vertical net placed in the water) of tidewater goby fish was taken at each sample location to compare to the water samples collected. eDNA detections from the water samples, validated against known goby presence and replicate qPCRs, were more effective than seining in accurately representing goby occupancy [4]. Schmelzle et al. (2016) concluded that eDNA has the potential to become the dominant method for tidewater goby tracking [4]. In another study, Dubreuil et al. (2021) set up baited fish traps for armored catfish at sample sites three times each for 16 hours. eDNA of the species of interest was detected in the water sampled at 18 sites. By comparison, the fish traps detected catfish at only 14 of the 18 sites where positive eDNA detections were made [8], and no captures were made at sites where eDNA did not detect the species. This study highlights that trapping is variable, as it can depend upon predation, breeding, and food sources [8]. eDNA may be subject to similar variability; however, due to its sensitivity, even a species low in numbers can be detected. While trapping non-native species is less of a concern in terms of invasiveness, the possibility of capturing at-risk species in the area can be avoided with eDNA. 
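The armored catfish counts above make for a quick sensitivity comparison between the two methods. A minimal sketch using only the site counts reported in the study (18 eDNA-positive sites, traps successful at 14 of them, no trap-only sites):

```python
# Site counts from the Dubreuil et al. (2021) comparison as summarized above.
edna_positive = 18  # sites with a positive eDNA detection
trap_positive = 14  # of those sites, how many also yielded a trapped catfish

trap_miss = edna_positive - trap_positive
print(f"Traps missed {trap_miss} of {edna_positive} eDNA-positive sites "
      f"({trap_miss / edna_positive:.0%})")
```

Because no site was trap-positive but eDNA-negative, eDNA's detections form a superset of the trap detections in this dataset, which is the pattern that motivates using eDNA as the more sensitive first-pass survey tool.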

Reaching new limits with low density species detection 

eDNA has proven able to detect species in a wide range of sampling locations, even where habitation has not previously been verified with traditional capture methods [1, 4]. This is in part because traditional capture methods become all the more difficult as a species' population size declines. de Souza et al. (2016) monitored the black warrior waterdog salamander and flattened musk turtle, two at-risk species from the upper Black Warrior River basin of Alabama that are impacted by habitat degradation, using eDNA sampling. The low payoff of laborious methods such as dip netting, trapping, and electrofishing for these species makes eDNA a better alternative. The researchers highlighted how eDNA sampling is most effective when used alongside knowledge of a species' territorial range and migration patterns, which is also true for trapping and fishing, except that the latter take more time to gather the necessary sample size [5]. eDNA also has the sensitivity to detect low-density species, colonization events, and target species against similar ones [1, 3, 4, 9]. For instance, Mauvisseau et al. (2019) found that eDNA was able to differentiate between freshwater pearl mussels and non-target species using two different assays tested against other mussel species. Meanwhile, Franklin et al. (2018) had success in identifying smallmouth bass, even when simulations predicted the genetic marker of choice would amplify additional species from the eDNA sample. eDNA allows researchers to determine the presence of a species with low population numbers in order to expand the regions being targeted for conservation. 

Detecting non-native species 

Non-native species are found either at a low density due to recent colonization, or a high density due to successful adaptation. Both instances need to be monitored to assess plans of action for restoring ecosystem balance [8]. In a notable study, Franklin et al. (2018) detected smallmouth bass in order to protect populations of Pacific salmon and other native species of the Pacific Northwest. While the spread of smallmouth bass in the US has had economic benefits, close population management is required for habitat stability [9]. Smallmouth bass have been documented to consume 35% of a salmon run, which has negative consequences for salmon migration and local predator-prey interactions. eDNA, with its high sensitivity, was able to detect smallmouth bass against non-target species and identify sample locations that require conservation targeting [9]. Similarly, Dubreuil et al. (2021) tested eDNA by tracking armored catfish that recently started to inhabit rivers in Martinique and compete with gobies for food sources. The aquatic invasive species (AIS), armored catfish, was detected from water collections at 22% of sample sites. This AIS did not appear to have habitat preferences, such as pH, oxygen level, or temperature, making it more likely to successfully adapt to new territories [8]; hence the need for species monitoring.

Current limitations 

Despite the promise that eDNA shows, there are concerns over its consistency and longevity. For example, de Souza et al. (2016) evaluated the effect of species' seasonality on eDNA detection probabilities. Seasonal differences may limit the time frame in which eDNA can effectively be utilized and may introduce a sampling bias. The study discovered that warm and cool seasons played a significant role in the detection of their species of interest. Investigating a species' migratory and spawning behavior can improve the reliability of eDNA in the same way it improves traditional sampling methods [5]. 

Detection results can also be impacted by eDNA degradation, as validating a species' presence depends on whether recent DNA remains in the surrounding area [6]. Barnes et al. (2014) found that freshwater amphibian eDNA lasted for over two weeks, while marine fish eDNA only lasted for seven days. There are many possible drivers of eDNA degradation, such as sunlight or pH [6], so it is important to consider how genetic degradation may impact detections, or the lack thereof. Brys et al. (2020) also reported eDNA degradation within roughly one week for the 7 fish and 2 amphibian species the researchers sampled as their control, and cited Barnes et al. (2014) in suggesting that high temperatures, UV radiation, and similar factors may be to blame. 

eDNA degradation is also accompanied by dispersion, so detections can only be deemed reliable when they are repeatable in the area. Boothroyd et al. (2016) examined replicate water samples and found variation in the number of positive detections, despite eDNA being highly sensitive to amplification. The study notes that a downside of eDNA is that actual organisms are not being detected, so the DNA could come from run-off or the remains of a fish [1]. Because detections can vary in this way, eDNA collections may be a better supplement to, rather than replacement for, traditional capture methods. 

The type of aquatic ecosystem and the species present can also influence detection probabilities. Brys et al. (2020) observed a lentic (standing water) system with high eDNA decay rates and distance limitations for DNA retrieval. Metabarcoding was used to detect various species in a pond, where natural water samples were compared to those taken in proximity to a known variety of locally caged species. The 12S genetic marker detected the known fish and amphibians in the area via DNA sequencing in metabarcoding. Positive detections depend on whether a lentic or lotic (flowing water) system is being observed, as the flow of water in a lotic system increases the range of eDNA [3]. Species type and density were shown to impact eDNA dispersal rates as well [3], meaning samples taken in proximity to one another could show different detection results due to spatial limitations. 

Conclusion 

The goal of this review was to demonstrate how eDNA collection and analysis work, and how they can be useful for aquatic species conservation. Various factors, such as the behavior of a species, population density, and sample regions, determine whether eDNA can outperform traditional sampling methods. Methodology is often altered to account for these factors, as the species being monitored and the type of aquatic system being observed impact everything from sampling to analysis. Whether eDNA can replace traditional capture methods will therefore depend on the study itself and the information researchers are seeking. Dubreuil et al. (2021) had greater success using eDNA compared to trapping, noting that eDNA has the potential to detect species at a greater distance than a trap does. Future research should aim to identify the susceptibility of eDNA to degradation and false detections in different bodies of water. The more eDNA is understood, the simpler it becomes to validate findings and lessen reliance on traditional capture methods.

 

References:

  1. Boothroyd M, Mandrak NE, Fox M, Wilson CC. 2016. Environmental DNA (eDNA) detection and habitat occupancy of threatened spotted gar (Lepisosteus oculatus). Aquatic Conserv: Mar. Freshw. Ecosyst. 26: 1107–1119.
  2. Mauvisseau Q, Burian A, Gibson C, Brys R, Ramsey A, Sweet M. 2019. Influence of accuracy, repeatability, and detection probability in the reliability of species-specific eDNA based approaches. Scientific Reports. 9:580.
  3. Brys R, Haegeman A, Halfmaerten D, Neyrinck S, Staelens A, Auwerx J, Ruttink T. 2020. Monitoring of spatiotemporal occupancy patterns of fish and amphibian species in a lentic aquatic system using environmental DNA. Molecular Ecology. 30:3097–3110.
  4. Schmelzle MC, Kinziger AP. 2016. Using occupancy modeling to compare environmental DNA to traditional field methods for regional-scale monitoring of an endangered aquatic species. Molecular Ecology Resources. 16:895–908.
  5. de Souza LS, Godwin JC, Renshaw MA, Larson E. 2016. Environmental DNA (eDNA) detection probability is influenced by seasonal activity of organisms. PLOS ONE.11(10): e0165273.
  6. Barnes MA, Turner CR, Jerde CL, Renshaw MA, Chadderton WL, Lodge DM. 2014. Environmental Conditions Influence eDNA Persistence in Aquatic Systems. Environmental Science & Technology. 48, 1819−1827.
  7. Ratcliffe FC, Uren Webster TM, de Leaniz CG. 2020. A drop in the ocean: Monitoring fish communities in spawning areas using environmental DNA. Environmental DNA. 3:43-54.
  8. Dubreuil T, Baudry T, Mauvisseau Q, Arqué A, Courty C, Delaunay C, Sweet M, Grandjean F. 2021. The development of early monitoring tools to detect aquatic invasive species: eDNA assay development and the case of the armored catfish Hypostomus robinii. Environmental DNA. 4:349–362.
  9. Franklin TW, Dysthe JC, Rubenson ES, Carim KJ, Olden JD. 2018. A non-invasive sampling method for detecting nonnative smallmouth bass (Micropterus dolomieu). Northwest Science. 92(2):149–157.

A New Titan Among Bacteria

By Ethan Feild, Molecular and Medical Microbiology

Author’s Note: I have always been interested in “huge” single celled organisms, like the amoebas living at the bottom of the ocean or even slime molds. When I heard about a single bacteria cell reaching 2 centimeters, I almost couldn’t believe it. This article was originally written for my upper division writing course, to translate specialized knowledge for those who do not have the same experience with a topic that we might. I wrote about T. magnifica because I genuinely believe it to be a deeply interesting and amazing discovery, and I want to convey what makes this new species so interesting to people who don’t have a deep understanding of microbiology. I just had to know how it could grow that big. And now that I’ve learned more about it, I want to teach others that to reach this size, the cell has to break a lot of preconceptions we have about cellular limitations.

 

What are some of the largest living things you can think of? An elephant? A blue whale? What about the giant sequoias or towering redwoods? There are a lot of “biggest” organisms in the world, but would you ever expect a bacterium to claim that title? Just recently, a new king of giant bacteria was crowned.

Candidatus Thiomargarita magnifica, a bacterium whose cells can stretch up to an impressive two centimeters in length, was discovered just this February through joint research at Lawrence Berkeley National Laboratory and France’s Muséum National d’Histoire Naturelle [1]. 

“Only two centimeters? That’s as big as a penny!” you may think to yourself. And you’d be right, but this defies all expectations about how big bacteria can grow. Take a more common bacterium like E. coli, for example. It is the “gold standard” for many scientists, especially for bacterial size and morphology. A single cell averages only one micrometer in length (one thousand micrometers fit in a millimeter). By comparison, T. magnifica looks massive, reaching sizes over 20,000 times larger with just a single cell. If E. coli were as big as the average person, T. magnifica would tower over us at 100,000 feet (19 miles) tall, high enough to reach supersonic plane altitudes in the stratosphere. But if most bacteria are microscopic, what advantage is there to growing so massive?
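The scale comparison above is easy to verify with a little arithmetic. A quick sketch, assuming roughly 1 micrometer for an E. coli cell, 2 centimeters for T. magnifica, and a 5-foot person for the scaled-up analogy:

```python
# Back-of-the-envelope check of the size analogy.
E_COLI_M = 1e-6       # assumed E. coli length: ~1 micrometer
T_MAGNIFICA_M = 2e-2  # T. magnifica: up to 2 centimeters

scale = T_MAGNIFICA_M / E_COLI_M  # how many times larger T. magnifica is

person_ft = 5.0                  # assumed human height for the analogy
scaled_ft = person_ft * scale    # T. magnifica at "human" scale
scaled_miles = scaled_ft / 5280  # feet per mile

print(f"~{scale:,.0f}x larger than E. coli")
print(f"At human scale: ~{scaled_ft:,.0f} ft (~{scaled_miles:.0f} miles) tall")
```

The 20,000× ratio and the roughly 19-mile scaled height both fall out directly from the two lengths quoted in the text.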

Relative to a human-sized E. coli cell, a single Thiomargarita magnifica cell would be over twice as tall as Mount Everest.

 

Their massive size is a survival strategy for life in the outside environment. Bacteria like E. coli and Salmonella are small because they are pathogens. When you mostly live inside another organism, staying small means you can reproduce faster, spread more easily, avoid capture, and gather all the rich nutrients in your host. Yet outside our bodies live entire microbial ecosystems whose members, like T. magnifica, have to depend on themselves rather than a gracious host for everything.

The Thiomargarita genus, which despite the name has nothing to do with margaritas, is part of a larger group called “sulfur bacteria” that “eat” sulfur to get their energy. These bacteria like to live in the mud of streams, ponds, and estuaries where sulfur levels are high and oxygen levels are just right for survival. Unlike other, more active bacteria, Thiomargarita do not move around very much. Once they stick somewhere, they live their entire lives on that spot. By growing bigger, they can better handle changes in the concentration of their energy source without needing to relocate from a good spot [2]. This is a sound strategy used by many members of Thiomargarita, including the former size champion Thiomargarita namibiensis, which only grows to 750 micrometers in diameter.

But what does T. magnifica do differently to grow so big? Think of a simple bacterial cell like a balloon filled with water, glitter, and a piece of string. The rubber of the balloon is the membrane, the water is the cytoplasm that fills the cell, the glitter floating around inside is the different materials the cell needs to stay alive, and the string is the DNA, the blueprint of the cell. For our balloon cell to stay alive, it needs to move its parts around fast enough to fix damage, take in food, and excrete waste. When the balloon is small those parts can move around easily enough on their own; just by random chance, the pieces of glitter will get where they need to be, and our stringy DNA is more than enough to call all the shots.

As the balloon gets bigger, it takes longer for everything to cross the distance, and the DNA can’t make it to where it is needed. The surface area of our balloon has increased, but the volume inside grows even faster. This is the major problem with diffusion, which many bacteria rely on for their survival [3]. At microscopic sizes it takes less than a second for a molecule to diffuse its way through the cytoplasm. However, it takes oxygen over an hour to diffuse across even a millimeter [4]. And larger, heavier molecules, like life-sustaining sugars, move even slower. Growing too big means there’s too much space to cover, and diffusion alone cannot meet the bacterial cell’s needs to survive.
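The “too much space to cover” problem comes from the quadratic scaling of diffusion time with distance, roughly t ≈ x²/(2D). A rough sketch using a textbook-order diffusion coefficient for a small molecule in water; note that crowded cytoplasm slows diffusion considerably further, which helps push millimeter-scale transport toward the hour-plus timescale mentioned above:

```python
# Characteristic diffusion times from the scaling t = x^2 / (2 D).
D = 2e-9  # m^2/s; assumed free-water diffusion coefficient for a small molecule

def diffusion_time(x_m: float, d: float = D) -> float:
    """One-dimensional characteristic diffusion time in seconds over distance x_m."""
    return x_m ** 2 / (2 * d)

for label, x in [("1 micrometer", 1e-6), ("1 millimeter", 1e-3), ("1 centimeter", 1e-2)]:
    t = diffusion_time(x)
    print(f"{label}: ~{t:.3g} s (~{t / 3600:.2g} h)")
```

Because time grows with the square of distance, going from a micrometer to a centimeter (10,000× the distance) multiplies the diffusion time by 100,000,000× — sub-millisecond becomes many hours, which is exactly why diffusion alone cannot supply a centimeter-scale cell.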

But diffusion is all that bacteria really have. Bacteria belong to a group called the prokaryotes, whose cells appear relatively simple when compared to our own. The very definition of a prokaryote is a lack of membrane-bound organelles (tiny organs within the cell). This is why most bacteria stick to such small sizes; they physically can’t afford to grow too big without starving themselves.

Yet, some bacteria, like T. magnifica, are determined to prove both physics and biology wrong. Along with other members of the genus, T. magnifica fills most of its body with a large vacuole in order to effectively decrease the volume that needs diffusion. It’s like filling the water balloon with another large balloon, so that everything is pressed into the thin layer between. The cytoplasm is actually only around 3 micrometers thick the whole way around the cell, with the vacuole taking up 73% of its total volume [1]. So even though the cell may be very large, a lot of the important processes take place in the very periphery of the outer membrane in that very thin layer.
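The thin-shell trick can be made concrete with a little geometry. Treating a stretch of the cell as a cylinder of radius R with a cytoplasmic layer of thickness t (the ~3 micrometers from the text), the vacuole’s share of the volume is ((R − t)/R)². The radius used below is a hypothetical value chosen for illustration, not a measured figure:

```python
# Volume fraction of the central vacuole in a cylindrical cell whose
# cytoplasm forms a thin outer shell of fixed thickness.
def vacuole_fraction(radius_um: float, shell_um: float = 3.0) -> float:
    """Fraction of a cylinder's volume taken up by the inner vacuole."""
    return ((radius_um - shell_um) / radius_um) ** 2

r = 20.5  # hypothetical filament radius in micrometers, for illustration only
print(f"Vacuole fraction at R = {r} um: {vacuole_fraction(r):.0%}")
```

With a ~3 micrometer shell, a radius of about 20 micrometers already puts the vacuole near the ~73% reported for T. magnifica, and the cytoplasm stays a short diffusion distance from the membrane no matter how long the filament grows.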

That thin layer may hold all the cellular machinery, but it wouldn’t make sense to waste all that extra space, either. So the vacuole also doubles as a storehouse for the various chemicals that the bacteria might need to keep away from their proteins or are simply harder to come across. Important compounds like nitrates can be stored inside and kept at concentrations much higher than the surrounding environment whilst protecting other parts of the cell from harmful side reactions [2]. Recent investigations even indicate that they could use this extra membrane as a way to produce energy, much like how our own cells use mitochondrial membranes to produce most of our energy. This multipurpose structure is one of the reasons why T. magnifica has been able to reach such staggering lengths. It also proves that bacteria are more than the simple balloons of chemicals that we might make them out to be.

Another difficulty with large cell size relates to the cell’s DNA. With smaller cells, it’s easy enough to read off of a single copy for every protein it needs. But in a bigger cell, even with a thin cytoplasm, proteins would take too long to reach their destinations. And this is where T. magnifica really stands out from the rest. Where most bacteria have one copy of their DNA, T. magnifica has many, a situation known as polyploidy.

Polyploidy itself is hardly new. Even we have two copies of each of our 23 chromosomes, and some plants can have dozens of them. The other Thiomargaritas also have multiple copies of their chromosome spread around the cell, but T. magnifica stands out for having over 737,000 copies in a fully grown cell [1]. With one of the highest known numbers of DNA copies, T. magnifica can spread them throughout the body and maintain its impressive size.

Having the most DNA copies is not enough, however. One more unusual trait of T. magnifica is how it stores its DNA. Bacteria usually let their chromosome float around in the cytoplasm, accessed whenever it’s needed. But this giant actually keeps its DNA stored within specialized compartments that its discoverers have adorably named “pepins,” in analogy with the pips (small seeds) in a watermelon [1]. And the analogy isn’t too far off.



A closeup of T. magnifica’s insides. See how the cytoplasm is only a small part of the volume of the cell, with pepins scattered throughout to organize the DNA and protein factories.

 

These little pepins keep the DNA more compact and organized, preventing messy diffusion throughout the whole of the cell body. They even discovered that ribosomes, the cellular machinery that turns genetic code into proteins, are also housed within these little organelles. Some eukaryotes have multiple nuclei in their cells, but in bacteria, such a compartment is almost unheard of. The defining characteristic of prokaryotes is the very lack of a nucleus, a membrane compartment where DNA is stored, so what about these pepins? While not exactly the same in structure, it raises questions about just how complex bacteria can be, and where our own nuclei came from.

It is clear that without specialization, it is impossible for a single cell to grow so large. Our own cells, and the cells of every animal, plant, and fungus, house DNA within a nucleus. These pepins within T. magnifica play a similar role in holding DNA all around the cell’s body, which no other bacteria are known to do. It even uses a vacuole to lower its volume and potentially store nutrients like an organelle. If prokaryotes are not supposed to have such structures, then what does that mean for bacteria like T. magnifica? Do all prokaryotes have to be simple?

The more we learn from T. magnifica the more it defies our conventional definition of prokaryotic life. Further study of this amazing oddity could shed light on where eukaryotes like us came from, what the real limits of a cell’s size are, and if bacteria are as simple as we think. Or maybe this is just the tip of the giant bacteria iceberg. Maybe an even larger giant is still waiting to be found, ready to defy our expectations all over again.

 

References:

  1. Volland, J.-M., et al. (2022). A centimeter-long bacterium with DNA compartmentalized in membrane-bound organelles. bioRxiv.
  2. Salman, V., Bailey, J. V., & Teske, A. (2013). Phylogenetic and morphologic complexity of giant sulphur bacteria. A Van Leeuw J Microb. 104(2):169–186.
  3. Marshall, W. F., et al. (2012). What determines cell size? BMC Biol. 10(1).
  4. Levin, P. A., & Angert, E. R. (2015). Small but mighty: Cell size and bacteria. CSH Perspect Biol. 7(7).

The Long-term Effects and Implications of Testicular Cancer Treatment

By Michael Guo, Molecular and Medical Microbiology ‘25

Author’s Note: After I returned home for winter break last year, I learned that a friend from my high school was diagnosed with testicular cancer. While I have limited experience with cancer and pathology, I hoped to educate myself about a topic that impacts and will continue to impact the life of one of my friends, and improve my medical literacy as well. This review is primarily based on discussing increased risk factors from testicular cancer and treatment, focusing on resulting secondary malignant neoplasms and cardiovascular disease. 

 

Introduction:

In the past century, the two most prominent causes of death have been heart disease and cancer [1]. Heart disease disproportionately affects older adults, and cancer typically follows a similar pattern. One exception is testicular cancer, which, in contrast to most types of cancer, occurs most often in 25-45 year old males [2, 3]. Another defining feature of testicular cancer is its extremely high survival rate, hovering around 95% in most cases. While this high survival rate is admirable, testicular cancer survivors face a greater risk of long-term effects and cancer recurrence from treatment than survivors of other cancers. Long-term treatment effects loom larger in testicular cancer than in other cancer types because survivors are often younger: the testes are relatively exposed organs compared to the heart, lungs, and other internal organs, which makes cancerous lumps easier to detect early, but it also means young survivors could live with treatment’s long-term effects for multiple decades [4]. 

Caption: A study [10] depicts data collected on the ages of TC patients, with the majority of survivors falling in the 25-45 year old age group.

Testicular cancer (TC), in its most simplified definition, is the uncontrolled division of cancerous cells in the testicular tissue of males. Most TC treatment consists of surgical removal of cancerous tissue, radiation therapy, and chemotherapy, with the latter two carrying more adverse effects. One such effect is the development of secondary malignant neoplasms (SMNs): new cancerous growths that arise because of previous radiation therapy and/or chemotherapy [5]. 

The chemical drugs used in chemotherapy can also lead to various issues, including infertility, low testosterone, and various heart complications/diseases [6]. By deepening our understanding of the exact connections between TC treatments and their long-term effects, healthcare workers can greatly decrease mortality and improve the quality of life of testicular cancer survivors. 

Testicular Cancer:

In order to distinguish when various TC treatments are used, one must understand the many forms and stages of TC. Testicular cancer almost always consists of germ cell tumors, cancers that form from germ cells, the sex cells (sperm, in males). Non-germ cell TCs are called stromal tumors; these form from supportive tissue in the testes, are relatively rare, and can almost always be removed locally with surgery. There are two main types of germ cell TC: seminomas and nonseminomas. Seminomas are localized to the testes and can be treated with surgery followed by radiation therapy and/or chemotherapy. Nonseminomas have usually spread throughout the body and require more intense chemotherapy, with subsequent surgery as needed depending on the stage and clinical observations [7, 8]. These different forms of TC require different treatments, and patients often receive a combination of several kinds. Most TC cases consist of germ cell tumors, must be treated with some form of radiotherapy or chemotherapy, and carry many adverse, long-term side effects. 

One of the most important designations in a cancer diagnosis is the stage. Cancerous tumors are categorized into five stages of increasing severity, from 0 to IV. Stage 0 and Stage I indicate cancerous cells confined to one specific area and are considered early-stage cancers. Stages II and III are used when cancerous cells have spread to surrounding tissues or the lymph nodes. Spreading cancer cells are most commonly first found in the lymph nodes, where they congregate from the circulation of the lymphatic system before spreading to other organs. When cancerous cells spread beyond the original organ and lymph nodes, the cancer is classified as Stage IV, the most advanced stage [9]. 

Caption: Graphic showing the progression of cancer cell growth within a group of cells in the five cancer stages [7]. 

Statistically, most prognoses for TC patients are encouraging. A committee of German cancer statisticians observed that TC makes up only around 1.6% of cancers in men, and 90% of TC cases are diagnosed at Stage I or II. TC can also be categorized by form: stromal or unknown non-germ cell tumors make up around 7% of all TC, while seminomas make up around 62% and various nonseminomas around 31% [10]. Combined, the high rate of early-stage diagnosis and the fact that the majority of cases are localized result in close to a 95% 5-year survival rate, meaning most patients survive for at least 5 years after diagnosis. Not to be overlooked, however, is the impact of treatments for TC developed in the last 30 years. 

Treatment and Side Effects:

Most of the adverse effects experienced by testicular cancer survivors derive from the harsh treatment options available rather than from the cancer itself. So far, balancing the long-term effects of medication against treatment sufficient to remove the cancer cells has proven the most successful course of medical care. All forms of TC can be treated with surgical removal of tumors. When the cancer is diagnosed early, surgeons almost always remove a testicle to prevent cancerous tissue from spreading. Surgical removal has little to no long-term side effects beyond normal surgical recovery. 

Chemotherapy and radiation therapy are more commonly used for nonseminomas or more advanced-stage TC (Stage III/IV). While chemotherapy uses specific drugs or drug combinations to target and kill rapidly dividing cells, radiation therapy uses focused radiation to break apart cancer cell DNA, which prevents division [8]. The two, however, come with different side effects. In chemotherapy for TC specifically, almost all regimens contain bleomycin, etoposide, and cisplatin. This combination causes especially harsh long-term side effects, including infertility, low testosterone, heart disease, and the development of secondary cancers [2, 11]. In recent statistical studies tracking TC survivors who underwent chemotherapy, radiation therapy, or both, researchers have noticed that chemotherapy can increase the risk of SMNs and cardiovascular disease (CVD), while radiation therapy greatly increases the risk of SMNs but not necessarily CVD. 

Caption: Graph of cancer risk: the blue line depicts the risk of all cancers after a seminomatous TC diagnosis, compared to the green line representing the general population's risk of a seminomatous cancer. The red line depicts the risk of all cancers after a nonseminomatous TC diagnosis, compared to the general population's risk of nonseminomatous TC. 

Secondary Malignant Neoplasms:

Healthcare professionals and researchers have long known that radiation and certain chemicals can cause cancer. Secondary malignant neoplasms (SMNs), as they have been termed, are cancers that form outside the tissues of the originally cancerous organ as a result of chemotherapy and/or radiation therapy. Bokemeyer and Schmoll reported that “Radiotherapy is associated with a two- to threefold increased risk for secondary solid tumors” [4]. More recent studies have shown that patients exposed to higher amounts of radiation during treatment are more likely to develop SMNs than patients with lower exposure, since healthy tissues near the cancer cells are also exposed to high amounts of radiation during treatment [4, 12]. Incidences of SMNs in TC survivors are especially noticeable and impactful because survivors' younger age allows more time for secondary cancers to occur post-treatment. While little can apparently be done during treatment to combat the development of radiation-induced SMNs, changes have been made in recent years. When TC treatment requires radiation therapy, doctors now aim to use minimal doses of radiation and, in the past several years, have completely stopped radiation therapy concentrated in the chest area [13, 14]. Although follow-up appointments were established before the long-term effects of TC treatment had been quantified, they are now taken more seriously and continued for a longer period after treatment, with a larger emphasis on secondary cancers.

Chemotherapy has also been correlated with the development of SMNs after initial treatment. Two of the drugs used in TC treatment, etoposide and cisplatin, have caused secondary malignancies to arise even in the treatment of cancers other than TC. Researchers generally agree, however, that chemotherapy contributes less than radiation therapy to the development of SMNs. Decades-old research confirmed the correlation between complications following treatment and more than four cycles of cisplatin-based therapy [15]. In TC treatment, cisplatin specifically has been linked to an increased risk of leukemia and myelodysplastic syndrome, both complications of the blood-forming cells in bone marrow. In another study, researchers likened the effects of chemotherapy and radiation therapy on SMNs and CVD to those of smoking, a well-known carcinogen (cancer-causing agent) [13, 14]. While the individual effects of chemotherapy and of radiation therapy on the development of SMNs have been documented, few studies differentiate the effects of each when the two are combined. Future research surveying older survivors with longer-term effects could be the key to optimizing TC treatment while decreasing SMNs.  

Cardiovascular Disease:

While both chemotherapy and radiation therapy have documented links to SMNs, radiation therapy has not been connected to a greater risk of CVD. Among the various negative effects of chemotherapy, however, CVD has been one of the most important causes of premature death in TC survivors. Of the chemotherapy drugs, cisplatin is a platinum-based (heavy metal) compound that, with repeated use, can build up in and weaken heart muscle, as well as cause hearing impairment and infertility [15, 16]. While bleomycin produces similar toxicities, cisplatin specifically damages the mitochondrial or nuclear DNA of certain cells. This causes mammalian cells producing ATP through mitochondrial respiration to create reactive oxygen species (ROS), unstable molecules formed from O2 that can cause damage or cell death when they react [17]. With repeated chemotherapy treatment, cisplatin can build up in certain tissues and is often the cause of side effects such as hearing loss (cochlea), hair loss (hair follicles), and CVD (inner linings of arteries). Exactly why cisplatin builds up in certain tissues, however, is currently unknown [11]. 

Cases of CVD from cancer treatment are especially visible in TC because of the younger age of TC patients. Surviving TC at a mean age of 37 could mean living with an increased risk of CVD for upwards of 40 years. Researchers have attempted to substitute cisplatin with a similar compound, carboplatin, in hopes of decreasing these known long-term effects. However, carboplatin has never matched cisplatin's cure rates, even though it decreases nerve and hearing damage [15, 18]. As stated before, it is unclear whether the long-term effects of cisplatin-based chemotherapy outweigh its monumental success in treating TC, simply because of the lack of data available from long-term survivors. Eventual data from TC survivors could help determine the longer-term impacts of cisplatin in chemotherapy and inform discussions about the need for a cisplatin substitute.

Conclusion:

Judging by the post-diagnosis 5-year survival rate, testicular cancer is one of the most survivable cancer types. However, the unusually young age of TC patients makes the long-term effects of advanced-stage cancer treatment easier to observe. Both radiation therapy and chemotherapy have been documented to increase the risk of secondary malignant neoplasms, with chemotherapy also leading to a large variety of complications, including infertility, low testosterone, and cardiovascular disease. While certain studies have shown such adverse effects of TC treatment, not enough data has been gathered from treated TC patients who have lived a longer period of time since treatment. With future studies, researchers could discern the need for alternative treatments for testicular cancer, or for methods to prevent the harmful effects of current treatment.

 

References:

  1. Leading Causes of Death. (2022, January 13). Centers for Disease Control and Prevention. https://www.cdc.gov/nchs/fastats/leading-causes-of-death
  2. Khan, O., & Protheroe, A. (2007). Testis Cancer. Postgraduate Medical Journal. Volume 83 (Issue: 984). https://doi.org/10.1136/pgmj.2007.057992
  3. Toni, L. D., ŠAbovic, I., Cosci, I., Ghezzi, M., Foresta, C., & Garolla, A. (2019). Testicular Cancer: Genes, Environment, Hormones. Frontiers in Endocrinology. https://doi.org/10.3389/fendo.2019.00408
  4. Bokemeyer, C., & Schmoll, H. J. (1995). Treatment of testicular cancer and the development of secondary malignancies. Journal of Clinical Oncology. Volume 13 (Issue: 1) https://doi.org/10.1200/jco.1995.13.1.283
  5. Virginia Cancer Institute. (n.d.). Secondary Malignancies. Retrieved March 7, 2022, from https://www.vacancer.com/diagnosis-and-treatment/side-effects-of-cancer/secondary-malignancies/
  6. Travis, L. B., Beard, C., Allan, J. M., Dahl, A. A., Feldman, D. R., Oldenburg, J., Daugaard, G., Kelly, J. L., Dolan, M. E., Hannigan, R., Constine, L. S., Oeffinger, K. C., Okunieff, P., Armstrong, G., Wiljer, D., Miller, R. C., Gietema, J. A., Leeuwen, F. E., Williams, J. P., . . . Fossa, S. D. (2010). Testicular Cancer Survivorship: Research Strategies and Recommendations. Journal of the National Cancer Institute. Volume 102 (Issue: 15). https://doi.org/10.1093/jnci/djq216
  7. Johns Hopkins Medicine. (n.d.). Types of Testicular Cancer. Retrieved March 7, 2022, from https://www.hopkinsmedicine.org/health/conditions-and-diseases/testicular-cancer/types-of-testicular-cancer
  8. The American Cancer Society medical and editorial content team. (2019, September 4). Treatment Options for Testicular Cancer, by Type and Stage. American Cancer Society. Retrieved March 7, 2022, from https://www.cancer.org/cancer/testicular-cancer/treating/by-stage.html
  9. Langmaid, S. (2016, November 28). Stages of Cancer. WebMD. Retrieved March 7, 2022, from https://www.webmd.com/cancer/cancer-stages
  10. Association of Population-based Cancer Registries in Germany & German Centre for Cancer Registry Data at the Robert Koch Institute. (2021, April 26). Testicular cancer. Zentrum Für Krebsregisterdaten. Retrieved March 7, 2022, from https://www.krebsdaten.de/Krebs/EN/Content/Cancer_sites/Testicular_cancer/testicular_cancer_node.html
  11. Murphy, M. P. (2009, January 1). How mitochondria produce reactive oxygen species. Biochemical Journal. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2605959/
  12. Curreri, S. A., Fung, C., & Beard, C. J. (2015). Secondary malignant neoplasms in testicular cancer survivors. Urologic Oncology: Seminars and Original Investigations. https://doi.org/10.1016/j.urolonc.2015.05.002
  13. The American Cancer Society medical and editorial content team. (2020, June 9). Second Cancers After Testicular Cancer. American Cancer Society. Retrieved March 7, 2022, from https://www.cancer.org/cancer/testicular-cancer/after-treatment/second-cancers.html
  14. van den Belt-Dusebout, A. W., de Wit, R., Gietema, J. A., Horenblas, S., Louwman, M. W. J., Ribot, J. G., Hoekstra, H. J., Ouwens, G. M., Aleman, B. M. P., & van Leeuwen, F. E. (2006). Treatment-specific risks of second malignancies and cardiovascular disease in 5-year survivors of testicular cancer. Journal of Clinical Oncology. Volume 25 (Issue: 28) https://doi.org/10.1200/jco.2006.10.5296
  15. Moul, J. W., Robertson, J. E., George, S. L., Paulson, D. F., & Walther, P. J. (1989). Complications of Therapy for Testicular Cancer. The Journal of Urology. Volume 142 (Issue: 6) https://doi.org/10.1016/S0022-5347(17)39135-8
  16. Travis, L. B., Fosså, S. D., Schonfeld, S. J., McMaster, M. L., Lynch, C. F., Storm, H., Hall, P., Holowaty, E., Andersen, A., Pukkala, E., Andersson, M., Kaijser, M., Gospodarowicz, M., Joensuu, T., Cohen, R. J., Boice, J. D., Dores, G. M., & Gilbert, E. S. (2005). Second Cancers Among 40,576 Testicular Cancer Patients: Focus on Long-term Survivors. Journal of the National Cancer Institute. https://doi.org/10.1093/jnci/dji278
  17. Breglio, A. M., et al. (2017, November 21). Cisplatin is retained in the cochlea indefinitely following chemotherapy. Nature Communications. https://www.nature.com/articles/s41467-017-01837-1
  18. Haugnes, H. S., Bosl, G. J., Boer, H., Gietema, J. A., Brydøy, M., Oldenburg, J., Dahl, A. A., Bremnes, R. M., & Fosså, S. D. (2012). Long-Term and Late Effects of Germ Cell Testicular Cancer Treatment and Implications for Follow-Up. Journal of Clinical Oncology. Volume 97 (Issue: 18) https://doi.org/10.1200/JCO.2012.43.4431
  19. Baird, D. C., Meyers, G. J., Darnall, C. R., & Hu, J. S. (2018). Testicular Cancer: Diagnosis and Treatment. American Family Physician. https://www.aafp.org/afp/2018/0215/p261.html
  20. Smith, A. (2019, December). The Long Haul: Facing the Long-Term Side Effects of Testicular Cancer. Cure Media. Retrieved March 7, 2022, from https://www.curetoday.com/view/the-long-haul-facing-the-long-term-side-effects-of-testicular-cancer

650-Million-Year-Old Enzyme Used to Target Cell Death in Cancer Cells

By Vishwanath Prathikanti, Anthropology, ‘23

Author’s note: As someone studying Anthropology at Davis, I often see my friends confused when I tell them how much of my studies consist of biology and chemistry. It’s a fairly common perception that anthropologists mainly study human culture, and while cultural anthropology is an important aspect of the field, it is only a part of it. When I heard about how our ancestors’ enzymes are being used to advance our knowledge of cancer, I knew it could be an opportunity to change the perception of anthropology among students.

 

Most people have a general understanding of how cancer works: it arises when apoptosis, or programmed cell death, fails to occur in cells. These cells propagate and then aggregate into tumors. The tumors can spread across the body and lead to varying health complications depending on whether they are benign (isolated to one part of the body) or malignant (spread to other areas). Naturally, one possible solution would be to fix the part of cancer cells that prevents them from properly dying. So how does a cell die?

Apoptosis hinges on enzymes called effector caspases, which deactivate proteins that carry out normal cellular processes, activate nucleases and kinases that break down DNA, and disassemble various components of the cell [1]. To cause cell death in cancer cells, then, scientists would need to activate caspases. However, activating these caspases indiscriminately would affect all cells, not just cancerous ones. The challenge scientists face is activating caspases in cancer cells without impacting the healthy surrounding cells. Unfortunately, activating effector caspases in only cancerous cells requires an intimate knowledge of the different proteins that comprise the caspase family, knowledge the scientific community currently lacks.

In an effort to learn more about the structure of caspases, Suman Shrestha and Allan C. Clark from the University of Texas at Arlington decided to look to the past rather than just the present. Specifically, they wanted to analyze the folding mechanisms and structure of effector caspases and construct a picture of how they operated for our ancestors [2]. 

A recent trend in evolutionary biology and physical anthropology is to compare various proteins and their folding structures across organisms alive today and reconstruct what those proteins looked like in our ancestors [3]. This is carried out via a computer program that generates a phylogenetic tree of a protein family, a process known as ancestral sequence reconstruction (ASR). After the phylogeny is generated, the ASR program statistically infers where certain proteins changed or emerged in the tree [4]. It does so by comparing binding sites in the proteins. The program flags binding sites as “ambiguous sites” when a node (a branching point in the phylogenetic tree) has a less than 70% probability of being accurate [5]. In caspases, this ambiguity generally stems from one of two possibilities: either there is nearly a 50/50 chance that an identified ancestral protein led to the extant version rather than to another identified protein, or the binding site has a high mutation rate, lowering the probability that it has been characterized correctly [5]. As for the other sites, different ASR programs have slightly different levels of accuracy, but the most prominently used ones give around a 90-93% chance that every non-ambiguous site is accurate [8]. Finally, using the protein sequences of organisms alive today and the phylogeny that depicts their ancestors, the ASR program can present the most likely sequence of the protein at a particular node in the phylogeny [6].

 

Caption: The ASR process generates the phylogeny (C) as well as the ancestral sequences and their order (D), provided the sequences of extant species are supplied to the program [4].
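As a toy illustration of the ambiguity criterion described in the text (not the actual ASR software, which fits full likelihood models over a phylogeny), the following Python sketch infers each ancestral site from the most common residue among aligned descendant sequences and flags sites whose top probability falls below the 70% threshold. The sequences here are made up purely for illustration.

```python
# Toy sketch of per-site ancestral inference with an ambiguity flag.
# Real ASR tools model substitution rates along every tree branch;
# here we simply use residue frequencies across descendant sequences.
from collections import Counter

AMBIGUITY_THRESHOLD = 0.70  # sites below 70% support are "ambiguous"

def infer_site(residues):
    """Return (best_residue, probability, is_ambiguous) for one
    alignment column of descendant sequences."""
    counts = Counter(residues)
    best, n = counts.most_common(1)[0]
    prob = n / len(residues)
    return best, prob, prob < AMBIGUITY_THRESHOLD

def reconstruct(alignment):
    """Infer an ancestral sequence column-by-column from aligned
    extant sequences (all assumed to be the same length)."""
    ancestor, flags = [], []
    for column in zip(*alignment):  # transpose into alignment columns
        residue, _prob, ambiguous = infer_site(column)
        ancestor.append(residue)
        flags.append(ambiguous)
    return "".join(ancestor), flags

# Four made-up protein fragments standing in for extant sequences.
seqs = ["MKVD", "MKVE", "MRVD", "MEVD"]
anc, ambiguous = reconstruct(seqs)
print(anc)        # "MKVD" - most probable ancestral fragment
print(ambiguous)  # [False, True, False, False] - site 2 is ambiguous
```

Site 2 is flagged because "K" appears in only two of the four sequences (50% support), mirroring the near-50/50 case the article describes for ambiguous caspase sites.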

 

Using ASR, Shrestha and Clark discovered effector caspases first evolved in a common ancestor more than 650 million years ago (mya) when microorganisms and sponges dominated life. While ASR can’t identify the species of the organism, it can generate the predicted sequences of these ancient caspases. This is all they need to recreate these proteins and better understand how these caspases function under healthy conditions versus cancerous ones [2, 7]. 

Among the 12 proteins that make up the caspase family, Shrestha and Clark decided to reconstruct the ancestor of three specific ones: caspase-3, -6, and -7 [7]. These three caspases were chosen because they are specifically responsible for cell death, whereas the others are linked to inflammation or activation of other enzymes [7, 8]. After sequencing the proteins, Shrestha and Clark were able to identify changes in the folding structures and sequences that could activate effector caspases in tumor cells without triggering cell death in healthy cells.

Specifically, they confirmed two folding precursors in the formation of the caspase-6 and -7 proteins. While these precursors had already been discovered in caspase-3, the discovery was significant for understanding how the caspases work in a normal cell and how they are altered in a cancer cell. Shrestha and Clark noted mutations that slow the formation of these precursors, which greatly slows the production of caspases and causes a cell not to die when it needs to [2]. Understanding this regulatory process may allow researchers to discover a way to reactivate caspase production in cancer cells.

The vast majority of the data collected in the study concerns how stable these proteins are and how they have evolved since our common ancestor 650 mya. The researchers found that caspase-6 was the most stable of the three and that, at lower pH, caspase-6 is the only one that does not unfold irreversibly [2]. This suggests a more specialized role for caspase-6 compared to caspase-3 and -7, and the data may be useful for adapting cancer-targeting drugs. For example, if a cancer aggregate sits in a low-pH environment of the body such as the stomach, a cancer-targeting drug might utilize caspase-6 specifically to activate programmed cell death.

While the results are still fairly recent and have not had adequate time to be implemented into a treatment, Morteza Khaledi, dean of the College of Science at the University of Texas at Arlington, was incredibly excited about the results. In a press statement to the University of Texas at Arlington, he announced that the research had yielded “vital information about the essential building blocks for healthy human bodies” and that the knowledge gained from the study will be seen as “another weapon in our fight against cancer” [7].

 

References:

  1. https://www.sciencedirect.com/topics/medicine-and-dentistry/effector-caspase 
  2. https://www.sciencedirect.com/science/article/pii/S0021925821010528?via%3Dihub 
  3. https://www.nature.com/articles/nrg3540 
  4. https://onlinelibrary.wiley.com/doi/10.1002/bip.23086 
  5. https://www.sciencedaily.com/releases/2022/01/220112094022.htm 
  6. https://link.springer.com/chapter/10.1007/978-1-4614-3229-6_4 
  7. https://portlandpress.com/biochemj/article/476/22/3475/221018/Resurrection-of-ancestral-effector-caspases 
  8. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0020069
  9. https://www.sciencedirect.com/science/article/pii/S2213020916302336

Clearing the Cellular Landfill: The Use of Chaperone-Mediated Autophagy to Treat Alzheimer’s Disease

By Reshma Kolala, Microbiology, ‘22

Author’s Note: Alzheimer’s disease is one of the most common neurodegenerative disorders, affecting nearly 1 in 9 individuals aged 65 or older. Current therapies fail to address the underlying pathophysiology of the disorder, focusing instead on ameliorating the neuronal symptoms that result from Alzheimer’s disease. I was immediately intrigued by the proposal that an existing mechanism for cellular waste removal, chaperone-mediated autophagy, could be reinvigorated to remove the toxic protein aggregates characteristic of an Alzheimer’s diagnosis, thereby targeting a significant contributing factor to the disease. This finding paves the way for new therapies that prevent or delay the onset of Alzheimer’s disease.

 

“Imagine someone has taken your brain and it’s an old file cabinet and spread all the files over the floor, and you have to put things back together,” describes Greg O’Brian, an award-winning political writer who was diagnosed with Alzheimer’s disease in 2010. The disorienting feeling described by O’Brian in an interview with Medical Daily is familiar to those diagnosed with Alzheimer’s disease. “My former life no longer exists, and it’s up to me to create a new life,” explains Chris Hannafan in an interview with PBS Newshour, a year after his Alzheimer’s diagnosis. Alzheimer’s disease is a progressive brain disorder that leads to memory loss, developmental disabilities, and cognitive impairment. The cause of the disorder appears to be a culmination of complex factors that arise as we age, such as the degeneration of neuronal pathways, immune system dysfunction, and the buildup of β-amyloid protein [1]. Due to its composite nature, a cure for the neurodegenerative disorder continues to elude the scientific community, and treatment remains focused on palliative care: medical care aimed at relieving and mediating the symptoms of the disorder.  

Of the several factors that contribute to an Alzheimer’s diagnosis, scientists have recently focused on a cellular process known as chaperone-mediated autophagy (CMA). Autophagy is a critical and versatile cellular mechanism that allows our cells to degrade or eliminate unnecessary or damaged components [2]. “Without autophagy, the cell won’t survive,” notes Juleen Zierath, a physiologist at the Karolinska Institute in Stockholm, in an interview with Nature. The autophagy process may vary from cell to cell and is tailored to meet the demands of a specific cell’s workload. CMA refers to a form of autophagy that maintains the delicate balance of proteins in the brain through the use of chaperones. Chaperones, or cellular “helpers,” lock onto faulty proteins to prevent buildup until the cell degrades them. Like other cellular processes in our body, CMA naturally becomes less efficient as we age. This may be attributed to the accumulation of dysfunctional proteins and a compromised ability to respond to stressors over time [2]. When the age-dependent decline of CMA is paired with a neurodegenerative disorder such as Alzheimer’s disease, it has been proposed that this age-related inefficiency accelerates. The result is toxic aggregations, or clumps, of damaged proteins that upset the balance of proteins in the brain and entrap functional proteins, generating more blockage. Without CMA, our cells’ cleanup mechanism, this cellular landfill continues to build and begins to interfere with other critical biological processes.

In April 2021, Dr. Ana Maria Cuervo and her team of researchers at the Albert Einstein College of Medicine published a breakthrough study that investigated the relationship between inefficient CMA and the progression of neurodegenerative diseases in a mouse model of Alzheimer’s disease [3]. Cuervo, co-leader of the study and co-director of the Institute for Aging Research at Einstein, noted that “these [mice], similar to the [Alzheimer’s] patients, have decreased memory, develop depression and [have] lack of engagement in general.” Using these mouse models, the first step of the study was to confirm that CMA does, in fact, have an impact on the balance of proteins in the brain. To investigate this, the researchers generated a CMA-deficient mouse model through knockout, or removal, of a gene essential for CMA. Compared with mice that had normal CMA levels, the CMA-deficient mice exhibited characteristics that align with rodent models of Alzheimer’s disease, including reduced short-term memory, abnormal motor skills, and other dysfunctional behaviors. By interfering with the cells’ ability to regulate proteins, the researchers demonstrated that a proper balance of proteins in the brain contributes to the maintenance of stable neurological function.

The link between CMA deficiency and abnormal neurological symptoms also runs in the other direction, further emphasizing the importance of CMA in the brain. In a second experiment, the researchers examined whether they could observe deficient CMA in mice already diagnosed with early Alzheimer’s disease. The results revealed lower levels of CMA activity in the mice afflicted with early Alzheimer’s disease. Ultimately, these findings indicate that in the early stages of Alzheimer’s disease, CMA activity is decreased and likely contributes to the harm caused by aggregated proteins.

With a more concrete understanding of how CMA plays a role in neurological disorders, Cuervo and her team created a drug that could be used to treat the CMA-related symptoms observed in Alzheimer’s disease. Her vision for this new drug was that “if [we] could increase the removal of these proteins or the cleaning process that occurs normally inside the brain, it might be enough to eliminate toxic proteins.” This pharmaceutical re-energizes a component of the CMA apparatus, allowing more efficient clearing of the protein debris that may otherwise create blockages and eventually manifest in neurological symptoms. In a typical Alzheimer’s patient, “the sheer amount of defective protein overwhelms CMA and essentially cripples it,” Cuervo continues. Essentially, since the levels of faulty protein are markedly higher in Alzheimer’s patients, the CMA process must function at its best to keep up. This new drug, CA, acts as a CMA enhancer by interacting with a receptor, a type of cellular gatekeeper. In a healthy individual, chaperones lock onto faulty proteins and guide them to a specific compartment within the cell, the lysosome. A single cell can have hundreds of lysosomes, each of which is tightly sealed off from the rest of the cell due to its highly acidic contents. Once the chaperone, faulty protein in hand, has reached the lysosome, it docks to the compartment and releases the protein into the lysosome to be digested. The entry of faulty proteins into the lysosome is monitored by various cellular gatekeepers present on the lysosome's surface, one of which is the LAMP2A receptor. Throughout one’s life, the production of the LAMP2A receptor is constant; with age, however, the deterioration of LAMP2A receptors accelerates. CA specifically targets this challenge by “[restoring] LAMP2A to youthful levels, enabling CMA to get rid of defective proteins so they can’t form those toxic protein clumps,” as explained by Cuervo. By increasing the number of LAMP2A receptors, or gates, on the lysosomal surface, the researchers were able to increase the channeling of faulty proteins into the lysosome, which acts as the garbage disposal of the cell.

This new treatment, despite still being in the early stages of testing, offers an optimistic glimpse of a potential revolution in treatment for those suffering from neurological disorders caused by protein aggregation. Although it may be a while before Alzheimer’s patients are free from the daily burden of reorganizing their mental file cabinets, this study sheds light on a previously overlooked cellular process and reveals a new avenue for Alzheimer’s research to explore. As Dr. Cuervo concludes, “this [finding] can be considered as an important step forward, or as a very good proof that enhancing cellular cleaning can be a way to develop therapeutics or interventions that can cure Alzheimer’s disease.”

 

References

  1. Armstrong RA (2013). What causes Alzheimer’s disease? Folia Neuropathologica, 51(3), 169–188. https://doi.org/10.5114/fn.2013.37702
  2. Bejarano E & Cuervo AM (2010). Chaperone-mediated autophagy. Proceedings of the American Thoracic Society, 7(1), 29–39. https://doi.org/10.1513/pats.200909-102JS
  3. Bourdenx M, Martín-Segura A, Scrivo A, Rodriguez-Navarro JA, Kaushik S, Tasset I, Diaz A, Storm NJ, Xin Q, Juste YR, Stevenson E, Luengo E, Clement CC, Choi SJ, Krogan NJ, Mosharov EV, Santambrogio L, Grueninger F, Collin L, Swaney DL, Cuervo AM (2021). Chaperone-mediated autophagy prevents collapse of the neuronal metastable proteome. Cell, 184(10), 2696–2714.e25. https://doi.org/10.1016/j.cell.2021.03.048

How Poop is Fighting COVID-19

By Laura Gardner, Biochemistry and Molecular Biology ‘22

Author’s Note: With so much information in the media and online about COVID-19, I find many people get lost in, and fall victim to, false information. I want to reassure the Davis community with factual information on how Davis is fighting COVID-19. With UC Davis’ strong scientific community, I was curious what tools were being used to mitigate the spread of COVID-19. In January 2021, I attended a virtual COVID-19 symposium called Questions about Tests and Vaccines led by Walter S. Leal, Distinguished Professor in the Department of Molecular and Cellular Biology at the University of California, Davis (UC Davis). In this symposium, I learned about Dr. Heather Bischel’s work testing the sewer system. This testing is another source for early detection of COVID-19. In combination with biweekly testing, I have no doubt that UC Davis is being proactive in its precautions throughout the pandemic, which made me personally feel safer. I hope that this article will shed light on wastewater epidemiology as a tool that can be implemented elsewhere.

 

Dr. Heather Bischel is an assistant professor in the Department of Civil and Environmental Engineering at the University of California, Davis. Bischel has teamed up with the city of Davis through the Healthy Davis Together initiative to use wastewater epidemiology, a technique for measuring chemicals in wastewater, to monitor the presence of SARS-CoV-2, the virus that causes COVID-19 [6]. When a person defecates, their waste travels through the pipes and is collected in the sewer system. The feces of both pre-symptomatic and asymptomatic individuals carry the genetic material that indicates the virus is present. This is because SARS-CoV-2 uses angiotensin-converting enzyme 2, also known as ACE2, as a cellular receptor; ACE2 is abundantly expressed in the small intestine, allowing viral replication in the gastrointestinal tract [1]. Wastewater thus serves as an early indicator of a possible COVID-19 outbreak, enabling the quick treatment and isolation that are important to stop the spread of the disease.

Samples are taken periodically from manholes around campus using a mechanical device called an autosampler. These autosamplers are lowered into manholes to collect wastewater flow samples every 15 minutes for 24 hours. The samples are then taken to the lab, where researchers extract genetic material and use the polymerase chain reaction (PCR) to detect the virus. Fluorescent markers that attach to the specific genetic sequence of the virus are added to the sample; when the viral sequence is present and amplified, the markers fluoresce visible light. This light is the signal that indicates a positive test result.
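The readout logic can be sketched in a few lines of Python. This is an illustrative simplification, not the lab’s actual pipeline: real instruments report a calibrated cycle-threshold (Ct) value from fluorescence curves, and every number below is made up.

```python
# Illustrative qPCR readout: a sample is called positive when its
# fluorescence crosses a preset threshold; the crossing cycle (Ct) is
# recorded. All numbers are invented for illustration, not assay data.

def cycle_threshold(fluorescence, threshold):
    """Return the first PCR cycle (1-indexed) at which fluorescence
    meets the threshold, or None if it never does (a negative sample)."""
    for cycle, signal in enumerate(fluorescence, start=1):
        if signal >= threshold:
            return cycle
    return None

# Hypothetical curves: signal roughly doubles each cycle when the viral
# sequence is present; a negative sample stays near baseline noise.
positive_sample = [0.1 * (2 ** c) for c in range(10)]
negative_sample = [0.1 + 0.01 * c for c in range(10)]

print(cycle_threshold(positive_sample, threshold=10.0))  # 8 -> positive
print(cycle_threshold(negative_sample, threshold=10.0))  # None -> negative
```

An earlier crossing cycle corresponds to more viral genetic material in the sample, which is why the method can also track trends over time.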

The samples are collected throughout campus, with a focus on residential halls. An infected person will excrete the virus through their bowel movements before showing symptoms. The assay is so sensitive that even if just one person among thousands is sick, it can still detect the presence of COVID-19 genetic material. When a PCR test provides a positive signal, the program works closely with the UC Davis campus to determine whether someone in that building has reported a positive COVID-19 test. If no one from the building is known to be positive, the team sends an email asking all students in the building to get tested as soon as possible. That way, the infected person can be identified and isolated quickly, eliminating exposure from unidentified cases [4].
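The follow-up protocol described above amounts to a small decision rule. A minimal sketch, with hypothetical building names and message text of my own choosing, not the program’s actual software:

```python
# Sketch of the response logic: a positive wastewater signal triggers a
# check against reported cases, then a building-wide testing request.
# Building names and messages are illustrative placeholders.

def respond_to_signal(building, pcr_positive, reported_cases):
    """Decide the follow-up action for one building's wastewater sample."""
    if not pcr_positive:
        return "no action"
    if building in reported_cases:
        return "case already identified; continue monitoring"
    return "email residents: please get tested as soon as possible"

reported_cases = {"Hall A"}  # buildings with a known positive test
print(respond_to_signal("Hall A", True, reported_cases))
print(respond_to_signal("Hall B", True, reported_cases))
print(respond_to_signal("Hall B", False, reported_cases))
```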

In collaboration with the UC Davis campus as well as the city of Davis, Dr. Bischel has implemented wastewater epidemiology throughout the community. Since summer 2020, Dr. Bischel’s team of researchers has collected data, which is available online through the Healthy Davis Together initiative [4].

In addition to being an early indicator, this data has also been used to determine trends, which can indicate if existing efforts to combat the virus are working or not [2]. Existing efforts include vaccinations, mask wearing, washing hands, maintaining proper social distancing, and staying home when one feels ill. UC Davis has implemented protocols including biweekly testing and a daily symptom survey that must be completed and approved in order to be on campus.

Wastewater epidemiology has been implemented all over the world, at more than 233 universities in 50 different countries, according to monitoring efforts from UC Merced [3]. This testing has been used in the past to detect polio, but has never before been implemented on the scale of a global pandemic. Inadequate infrastructure, including ineffective waste disposal systems, open defecation, and poor sanitation, poses global challenges, especially in developing countries [2]. Without tools for early detection, these communities are in danger of an exponential rise in cases.

“Our work enables data-driven decision-making using wastewater infrastructure at city, neighborhood, and building scales,” Dr. Bischel stated proudly in her latest blog post [2]. These decisions are crucial in confining COVID-19 as we continue to push through the pandemic.

Summary of how wastewater epidemiology is used to fight COVID-19

 

References:

  1. Aguiar-Oliveira, Maria de Lourdes et al. “Wastewater-Based Epidemiology (WBE) and Viral Detection in Polluted Surface Water: A Valuable Tool for COVID-19 Surveillance-A Brief Review.” International Journal of Environmental Research and Public Health vol. 17,24 9251. 10 Dec. 2020, doi:10.3390/ijerph17249251
  2. Bischel, Heather. Catching up with our public-facing COVID-19 wastewater research. Accessed August 15, 2021. Available from H.Bischel.faculty.ucdavis
  3. Deepshikha Pandey, Shelly Verma, Priyanka Verma, et al. SARS-CoV-2 in wastewater: Challenges for developing countries. International Journal of Hygiene and Environmental Health, Volume 231, 2021, 113634, ISSN 1438-4639, https://doi.org/10.1016/j.ijheh.2020.113634
  4. Healthy Davis Together. Accessed February 2, 2021. Available from Healthy Davis Together – Working to prevent COVID-19 in Davis
  5. UC Merced Researchers. COVIDPoops19 Summary of Global SARS-CoV-2 Wastewater Monitoring Efforts. Accessed February 2, 2021. Available from COVIDPoops19 (arcgis.com)
  6. Walter S. Leal. January 13, 2021. COVID symposium Questions about Tests and Vaccines. Live stream online on Zoom.

The Human-Animal Interface: Exploring the Origin, Present, and Future of COVID-19

By Tammie Tam, Microbiology ‘22

Author’s Note: Since taking the class One Health Fundamentals (PMI 129Y), I have been acutely aware of this One Health idea that the health of humankind is deeply intertwined with the health of animals and our planet. This COVID-19 pandemic has been a perfect model as a One Health issue. Through this article, I hope to introduce readers to a fuller perspective of COVID-19 as a zoonotic disease. 

 

The COVID-19 pandemic has escalated into a human tragedy, measured daily by an increasing number of infection cases and a mounting death toll. Yet to understand the current and future risks of the SARS-CoV-2 virus, one must account for the virus’s relationship with animals in the context of its zoonotic nature; transmission between animals and humans is often overlooked. Uncovering the range of intermediary hosts of the virus may provide clues to the virus’s origin, point to potential reservoirs for a mutating virus, and help inform future public health policies. As a result, a small but growing body of researchers is working to predict and confirm potential human-animal transmission models.

The origin of the SARS-CoV-2

Currently, the World Health Organization (WHO) and other disease detectives are still working to unravel the complete origin of the virus. Through viral genomic analysis comparing strains of human and animal coronaviruses, scientists have narrowed down the primary animal reservoir for the virus [1]. They suspect bats to be the most likely primary source because the SARS-CoV-2 strain is a 96.2 percent match for a bat coronavirus, bat-nCoV RaTG13 [1]. Despite the close match, the differences in key surface proteins between the two viruses are distinct enough to suggest that the bat coronavirus had to have undergone mutations through one or more intermediary hosts in order to infect humans [2].
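Figures like the 96.2 percent match come from comparing aligned genome sequences position by position. As a rough sketch, assuming pre-aligned, equal-length sequences and ignoring gaps and alignment scoring entirely (real tools handle both), percent identity is simply the share of matching positions:

```python
# Rough sketch of percent identity between two pre-aligned sequences.
# The fragments below are toy examples, not actual RaTG13 or SARS-CoV-2
# sequence data.

def percent_identity(seq_a, seq_b):
    """Share of matching positions between two aligned sequences, as %."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("ATGGCTAGCT", "ATGGCTTGCT"))  # 90.0
```

Real comparisons span roughly 30,000-nucleotide genomes; a 96.2 percent identity means only about 1 in 26 positions differ.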

To identify potential intermediate hosts, scientists are examining coronaviruses specific to different animal species [1]. If SARS-CoV-2 is genetically similar to another animal-specific coronavirus, SARS-CoV-2 may also possess viral proteins similar to those of the animal-specific coronavirus. With similar proteins, similar host-virus interactions can theoretically take place, allowing SARS-CoV-2 to infect the animal in question. For example, besides bats, a pangolin coronavirus, pangolin-nCoV, has the second-highest genetic similarity to SARS-CoV-2, which positions the pangolin as a possible intermediate host [3]. Because of the similarity, viral proteins of the pangolin coronavirus can interact with shared key host proteins in humans just as strongly as in pangolins [4]. However, more epidemiological research is needed to determine whether a pangolin had contracted coronavirus from a human or a human had contracted coronavirus from a pangolin. Alternatively, the intermediate host could have been another animal, but there are still no clear leads [1].

What it takes to be a host for SARS-CoV-2

In any viable host, the SARS-CoV-2 virus operates by sneaking past immune defenses, forcing its way into cells, and co-opting the cell’s machinery for viral replication [5]. Along the way, the virus may acquire mutations—some deadly and some harmless. Eventually, the virus has propagated in a high enough quantity to jump from its current host to the next [5].

Most importantly, for the virus to infect a host properly, it must recognize the entranceway into cells quickly enough, before the host immune system catches on to the intruder and mounts an attack [5]. SARS-CoV-2’s key into the cell is its spike glycoproteins, found on the outer envelope of the virus. Once the spike glycoproteins interact with an appropriate angiotensin-converting enzyme 2 (ACE2) receptor found on host cell surfaces, the virus blocks the regular functions of the ACE2 receptor, such as regulating blood pressure and local inflammation [6,7]. At the same time, the interaction triggers the cell to take in the virus [5].

Since the gene encoding the ACE2 receptor is relatively similar among humans, the virus can travel through and infect the human population easily. Likewise, most animals closely related to humans, like great apes, possess an ACE2 receptor similar in structure and function, which gives SARS-CoV-2 a path to hijack the cells of certain non-human animals [8]. Despite the overall similar structure and function, the ACE2 receptor varies between animal species at the key interaction sites with the spike glycoproteins, due to natural mutations retained because they make the receptor most efficient in each respective animal. Thus, while other proteins are also involved in viral entry into host cells, the ACE2 receptor is the one that varies between animals and most likely modulates susceptibility to COVID-19 [9].

As a result, scientists are particularly interested in the binding of the ACE2 receptor with the viral spike glycoprotein because of its implications for an organism’s susceptibility to COVID-19. Dr. Xuesen Zhao and their team from Capital Medical University examined the sequence identities and interaction patterns of the binding site between ACE2 receptors of different animals and the spike glycoproteins of the SARS-CoV-2 [10]. They reasoned that the more similar the ACE2 receptor of an animal is to humans, the more likely the virus could infect the animal. For example, they found ACE2 receptors of rhesus monkeys, a closely related primate, had similar interaction patterns as humans [10]. Meanwhile, they found rats and mice to have dissimilar ACE2 receptors and poor viral entry [10].
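The comparison logic behind such predictions can be sketched as follows. The residue letters and species entries here are hypothetical placeholders, not data from Zhao et al.; only the scoring idea, that more shared key contact residues predicts easier viral entry, comes from the text above.

```python
# Score each species' ACE2 by how many key spike-contact residues match
# the human sequence. All residue strings below are HYPOTHETICAL
# placeholders illustrating the method, not measured data.

HUMAN_KEY_RESIDUES = "KFHDYQ"  # invented residues at six contact sites

def similarity_score(species_residues):
    """Fraction of key ACE2 contact residues identical to human."""
    matches = sum(s == h for s, h in zip(species_residues, HUMAN_KEY_RESIDUES))
    return matches / len(HUMAN_KEY_RESIDUES)

species = {
    "rhesus monkey": "KFHDYQ",  # identical at all key sites
    "domestic cat":  "KFHDYE",  # one mismatch
    "mouse":         "NFQDAQ",  # several mismatches -> predicted poor entry
}

for name in sorted(species, key=lambda n: -similarity_score(species[n])):
    print(f"{name}: {similarity_score(species[name]):.2f}")
```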

While entrance into the cell is a major part of infection, there are other factors to consider, such as whether viral replication can subsequently take place [11]. With so many different organisms on the planet, the models simply provide a direction for where to look next: SARS-CoV-2 is unable to replicate efficiently in certain animals despite having the entrance key to get in. The virus replicates well in ferrets and cats, making them susceptible; in dogs, it can only weakly replicate; and in pigs, chickens, and ducks, it is unable to replicate at all [12]. Outside of the laboratory, confirmed cases in animals include farm animals such as minks; zoo animals such as gorillas, tigers, lions, and snow leopards; and domestic animals such as cats and dogs [13].

The future for SARS-CoV-2

Due to the multitude of intermediary hosts, COVID-19 is unlikely to disappear for good even if every person is vaccinated [14]. Viral spillover from human to animal can spill back to humans. Often, as the virus travels through a new animal population, the virus population will be subjected to slightly different pressures and selected for mutations that will confer a favorable advantage for virus survival and transmission within the current host population [15]. Sometimes, this could make the virus weaker in humans. However, there are times when the virus becomes more virulent and dangerous to humans if it spills back over from the animal reservoir [15]. Consequently, it is important to understand the full range of hosts in order to put in place preventative measures against viral spillover. 

As of now, most of the known susceptible animals usually do not get severely sick, with some known exceptions like minks [1]. Nevertheless, people must take precautions when interacting with animals, since research into this area is still developing and there are many unknown factors involved. Preventing infection is especially important for endangered species, which already face other threats that make them vulnerable to extinction [8]. As a result, some researchers are taking it into their own hands to keep certain animals safe. For example, after the San Diego Zoo’s resident gorillas contracted COVID-19 in January, the zoo proactively began using the experimental Zoetis vaccine to vaccinate its orangutans and bonobos, great apes that are considered closely related to humans and susceptible to COVID-19 [16]. Due to an assumed COVID-19 immunity in the gorillas and a limited supply of the Zoetis vaccine, the zoo decided not to vaccinate the gorillas [16]. Now, scientists are trying to modify the Zoetis vaccine for minks, because minks are very susceptible to severe symptoms from COVID-19 and have been shown to transmit the virus back to humans [17].

Besides the virus mutating into different variants through basic genetic mutations, people must be cautious of potential new coronaviruses that can infect humans [18]. The human population has encountered other novel coronaviruses over the past several years, so it is not out of the question. If a human coronavirus and an animal coronavirus infect the same animal host, a recombination event could create a new hybrid coronavirus [19].

For the SARS-CoV-2 virus, Dr. Maya Wardeh and their team at the University of Liverpool found over 100 possible host species where recombination events could take place [18]. These hosts are animals that can contract two or more coronaviruses, one of them being the SARS-CoV-2 virus. For instance, the lesser Asiatic yellow bat, a well-known host of several coronaviruses, is predicted to be one of these recombination hosts [18]. A species closer to home, the domestic cat, is another possible recombination host [18]. While it will take many different rare events, from co-infection to human interaction with the particular animal, for recombination to be possible, scientists are on the lookout.
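At its core, this kind of prediction is a set intersection: find every species associated with both SARS-CoV-2 and at least one other coronavirus. The host lists below are illustrative placeholders, not the study’s actual dataset:

```python
# Core idea of the recombination-host prediction: a candidate host is a
# species that can harbor SARS-CoV-2 AND at least one other coronavirus.
# The host associations below are illustrative, not real study data.

coronavirus_hosts = {
    "SARS-CoV-2": {"human", "domestic cat", "mink",
                   "lesser Asiatic yellow bat"},
    "MERS-CoV":   {"human", "dromedary camel", "lesser Asiatic yellow bat"},
    "FCoV":       {"domestic cat"},  # feline coronavirus
}

sars2_hosts = coronavirus_hosts["SARS-CoV-2"]
other_hosts = set().union(*(hosts for virus, hosts in coronavirus_hosts.items()
                            if virus != "SARS-CoV-2"))

# Species where co-infection, and hence recombination, is possible
recombination_hosts = sars2_hosts & other_hosts
print(sorted(recombination_hosts))
```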

Even without a full picture, the Centers for Disease Control and Prevention (CDC) understands the potential risks of animal reservoirs and advises COVID-19-infected patients to stay away from animals, wildlife or domestic, to prevent spillover [20]. COVID-19 has also brought to light zoonotic disease risks from illegal animal trades and wet markets. Once research into the human-animal transmission model becomes more well-developed, public health officials will have a clearer picture of how the pandemic spiraled to its current state and can develop policies to prevent it from happening again.

 

References: 

  1. Zhao J, Cui W, Tian BP. 2020. The Potential Intermediate Hosts for SARS-CoV-2. Frontiers in Microbiology 11 (September): 580137. https://doi.org/10.3389/fmicb.2020.580137
  2. Friend T, Stebbing J. 2021. What Is the Intermediate Host Species of SARS-CoV-2? Future Virology 16 (3): 153–56. https://doi.org/10.2217/fvl-2020-0390.
  3. Lam TT, Jia N, Zhang YW, Shum MH, Jiang JF, Zhu HC, Tong YG, et al. 2020. Identifying SARS-CoV-2-Related Coronaviruses in Malayan Pangolins. Nature 583 (7815): 282–85. https://doi.org/10.1038/s41586-020-2169-0
  4. Wrobel AG, Benton DJ, Xu P, Calder LJ, Borg A, Roustan C, Martin SR, Rosenthal PB, Skehel JJ, Gamblin SJ. 2021. Structure and Binding Properties of Pangolin-CoV Spike Glycoprotein Inform the Evolution of SARS-CoV-2. Nature Communications 12 (1): 837. https://doi.org/10.1038/s41467-021-21006-9
  5. Harrison AG, Lin T, Wang P. 2020. Mechanisms of SARS-CoV-2 Transmission and Pathogenesis. Trends in Immunology 41 (12): 1100–1115. https://doi.org/10.1016/j.it.2020.10.004
  6. Hamming I, Cooper ME, Haagmans BL, Hooper NM, Korstanje R, Osterhaus ADME, Timens W, Turner AJ, Navis G, van Goor H. 2007. The Emerging Role of ACE2 in Physiology and Disease. The Journal of Pathology 212 (1): 1–11. https://doi.org/10.1002/path.2162.
  7. Sriram K, Insel PA. 2020. A Hypothesis for Pathobiology and Treatment of COVID-19: The Centrality of ACE1/ACE2 Imbalance. British Journal of Pharmacology 177 (21): 4825–44. https://doi.org/10.1111/bph.15082
  8. Melin AD, Janiak MC, Marrone F, Arora PS, Higham JP. 2020. Comparative ACE2 Variation and Primate COVID-19 Risk. Communications Biology 3 (1): 641. https://doi.org/10.1038/s42003-020-01370-w
  9. Brooke GN, Prischi F. 2020. Structural and Functional Modelling of SARS-CoV-2 Entry in Animal Models. Scientific Reports 10 (1): 15917. https://doi.org/10.1038/s41598-020-72528-z.
  10. Zhao X, Chen D, Szabla R, Zheng M, Li G, Du P, Zheng S, et al. 2020. Broad and Differential Animal Angiotensin-Converting Enzyme 2 Receptor Usage by SARS-CoV-2. Journal of Virology 94 (18). https://doi.org/10.1128/JVI.00940-20.
  11. Manjarrez-Zavala MA, Rosete-Olvera DP, Gutiérrez-González LH, Ocadiz-Delgado R, Cabello-Gutiérrez C. 2013. Pathogenesis of Viral Respiratory Infection. IntechOpen. https://doi.org/10.5772/54287
  12. Shi J, Wen Z, Zhong G, Yang H, Wang C, Huang B, Liu R, et al. 2020. Susceptibility of Ferrets, Cats, Dogs, and Other Domesticated Animals to SARS–Coronavirus 2. Science 368 (6494): 1016–20. https://doi.org/10.1126/science.abb7015.
  13. Quammen D. And Then the Gorillas Started Coughing. The New York Times. Accessed February 19, 2021. Available from: https://www.nytimes.com/2021/02/19/opinion/covid-symptoms-gorillas.html
  14. Phillips N. 2021. The Coronavirus Is Here to Stay — Here’s What That Means. Nature 590 (7846): 382–84. https://doi.org/10.1038/d41586-021-00396-2
  15. Geoghegan JL, Holmes EC. 2018. The Phylogenomics of Evolving Virus Virulence. Nature Reviews Genetics 19 (12): 756–69. https://doi.org/10.1038/s41576-018-0055-5
  16. Chan S, Andrew S. 2021. Great Apes at the San Diego Zoo Receive a Covid-19 Vaccine for Animals. CNN. Accessed March 5, 2021. Available from: https://www.cnn.com/2021/03/05/us/great-apes-coronavirus-vaccine-san-diego-zoo-trnd/index.html
  17. Greenfield P. 2021. Covid Vaccine Used on Apes at San Diego Zoo Trialled on Mink. The Guardian. Accessed March 23, 2021. Available from: http://www.theguardian.com/environment/2021/mar/23/covid-vaccine-used-great-apes-san-diego-zoo-trialled-mink
  18. Wardeh M, Baylis M, Blagrove MSC. 2021. Predicting Mammalian Hosts in Which Novel Coronaviruses Can Be Generated. Nature Communications 12 (1): 780. https://doi.org/10.1038/s41467-021-21034-5.
  19. Pérez-Losada M, Arenas M, Galán JC, Palero F, González-Candelas F. 2015. Recombination in Viruses: Mechanisms, Methods of Study, and Evolutionary Consequences. Infection, Genetics and Evolution 30 (March): 296–307. https://doi.org/10.1016/j.meegid.2014.12.022.
  20. Centers for Disease Control and Prevention. 2020. COVID-19 and Your Health. Accessed February 11, 2020. Available from: https://www.cdc.gov/coronavirus/2019-ncov/daily-life-coping/animals.html.

Cover Art Guidelines for The Aggie Transcript’s 2020-2021 Print Edition

Thank you for submitting to The Aggie Transcript! The Editorial Board is excited to view your submission. All received submissions, pending board review, will be featured on our website and our social media. The winning art submission will be featured as the cover of our 2020-2021 print edition journal!

Here are a few guidelines on how to prepare and submit your art:

 

Size:

The cover dimensions are 8.5’’ x 11’’. Your design does not need to fill the entire space, but it will ultimately be printed on a cover of this size, so please keep this in mind when designing your submission.

 

Requirements:

The Aggie Transcript is only accepting digital art for the 2020-2021 cover. This includes, but is not limited to:

  • graphic designs
  • vector drawings
  • digital/tablet drawings
  • digital renderings
  • photography

If any non-original sources are used (e.g. vector images, photographs, etc.), they must have a Creative Commons license. Here’s how you can check if your image has a CC license:

  • Go to Google Images.
  • Click Tools → Usage Rights → Creative Commons licenses.
  • Search for your topic (e.g. brain anatomy).
  • The resulting images are free to download and reuse, and you can select any of them to include in your work.

Alternatively, you can use Wikimedia, Pixabay, and other free stock image sites to find images. Please see below for additional resources. 

The cover art must be connected to science on the SARS-CoV-2 virus or COVID-19 pandemic in some way. We encourage you to be as creative as you’d like and interpret this however you see fit, but please note the following:

We are NOT looking for designs that solely feature renderings of the SARS-CoV-2 virus particle. Submissions like the following two examples will most likely not be selected as the winning cover submission. Your design can certainly include renderings of the virus particle, but it should not be the prominent feature of your design unless you have some unique take on it (digital collage, etc.).

 

 

Style Suggestions:

The 2020-2021 print edition will be using the following color palette. It is not required that you follow this color palette, but we suggest that you choose colors in your design that will complement our color palette (or use these exact colors):

 

Resources:

For your convenience, we have provided 2 templates that contain our logo and masthead. If you’d like, you can use these templates while designing your submission, so you can get an idea of how your design will look on the cover should it be selected. 

dark background.png

light background.png

 

Submitting:

When you are ready to submit, please upload your design, WITHOUT the cover template provided, to this Google form. Again, we should receive only your art, without the logo and masthead pasted on it. Please upload your design in the highest quality you are able to, and be sure to write an author’s note describing your submission. The deadline for submission is April 27th, at 12:00 p.m. PST (noon).

 

Additional Resources:

 

For more details on submitting writing, photography, and art (COVID or non-COVID-related), please visit our submissions page.

Fold@Home: Aid in COVID-19 Research from Home

Image via Folding@Home

By Nathan Levinzon, Neurobiology, Physiology, and Behavior ‘23

Author’s Note: The purpose of this article is to introduce and inform the UC Davis scientific community of Folding@Home; a distributed computing project that allows individuals and researchers to donate computing resources from their computers towards COVID-19 research.

Keywords: COVID-19, Folding@Home, Distributed Computing

 

Reports of localized viral pneumonia cases in the Chinese city of Wuhan began in December 2019, initially amounting to little concern for humanity. Since then, the world has drastically changed as a result of COVID-19. As of September 16, 2020, almost thirty million cases of COVID-19 have been confirmed across the globe, claiming the lives of close to one million individuals. In California alone, there have been seven hundred seventy thousand cases and close to twenty-three thousand deaths as of September 28th [1]. Millions of individuals are currently under government-mandated shelter-in-place orders, forcing the lives of many to come to a standstill. As UC Davis Chancellor May stated in March of this year, “much of [UC Davis’] research is ramping down, but when it comes to the coronavirus, our efforts continue apace” [2]. One such effort taking place at UC Davis is called Folding@Home (FAH), and it allows researchers to study the mechanisms of COVID-19 with nothing but a computer.

FAH originated as a project to study how protein structures interact with their environment. Currently, some proteins of particular interest to FAH are the constituents of the virus that causes COVID-19. Like other coronaviruses, SARS-CoV-2 has four types of proteins: the spike, envelope, membrane, and nucleocapsid proteins. Many copies of the spike protein protrude from the surface of the virus, where they wait to encounter Angiotensin-Converting Enzyme 2 (ACE2) on the surface of human cells [3]. In order to develop therapeutic antibodies or small molecules for the treatment of COVID-19, scientists need to better understand the structure of the viral spike protein and how it binds to the human enzymes required for viral entry into cells. Before the spike proteins on SARS-CoV-2 can function, they must first take on a particular structure, or ‘conformation,’ through a process known as “protein folding.” Because of the many factors that impact protein folding, such as the electrostatic interactions between amino acids and their environment, research into therapeutics against COVID-19 first necessitates intensive computation in order to resolve protein structure [4]. Only after the proteins of SARS-CoV-2 are understood can the hunt for a cure begin.

FAH is able to study the complex phenomena of protein folding thanks to the computational power of distributed computing. A distributed computing project is a piece of software that allows volunteers to “donate” computing time from the Central Processing Units (CPUs) and Graphics Processing Units (GPUs) located in their personal computers towards solving problems that require significant computing power, like protein folding. In essence, FAH uses a personal computer’s computational resources while the computer is idle to perform calculations involving protein folding. This donated computing power is what forms the “nodes” within a greater cluster of other computers in a process known as “cluster computing.” FAH uses the cluster’s resources to run complex biophysical computer simulations in order to understand the complexities and outcomes of protein folding [5]. In this way, FAH brings together citizen scientists who volunteer to run simulations of protein dynamics on their personal computers. 

Studying protein folding via distributed computing has humble beginnings but has grown into a technology with the potential to research even the most elusive proteins. First, established protein conformations are used by FAH as starting points for a set of simulation trajectories through a technique called ‘adaptive sampling.’ The theory behind adaptive sampling goes as follows: if a protein folds through the states A to B to C, researchers can calculate the length of the transition time between A and C by simulating the A-to-B transition and the B-to-C transition [6]. A computer first simulates the initial conditions of a protein many times to determine the sample space of protein conformations. As the simulations discover more conformations, a Markov state model (MSM) is created and used to find the most dominant protein conformations. The MSM represents a master equation framework, meaning that, in theory, the complete dynamics of a protein can be described using a single MSM [7]. The MSM method significantly increases the efficiency of simulation, as it avoids unnecessary computation and allows for the statistical aggregation of short, independent simulation trajectories [8]. The amount of time it takes to construct an MSM is inversely proportional to the number of parallel simulations running, i.e., the number of CPUs and GPUs available [9]. At the end of computation, an aggregate final model of all the sampled states is generated. This final model is able to illustrate folding events and dynamics of the protein, which researchers can use to study and discover potential binding sites for novel therapeutic compounds.
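As a toy sketch of the MSM idea, assuming discrete conformational states and a fabricated trajectory (FAH’s actual tooling works on continuous molecular dynamics data), one can count transitions between states, normalize them into a transition matrix, and power-iterate to the stationary distribution, whose largest entry marks the dominant conformation:

```python
# Toy Markov state model: estimate transition probabilities from a
# discrete trajectory of conformational states, then find the stationary
# distribution by power iteration. The trajectory is fabricated.

def build_msm(trajectory, n_states):
    """Row-stochastic transition matrix estimated from a state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(trajectory, trajectory[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total for c in row] if total else row[:])
    return matrix

def stationary_distribution(matrix, iterations=1000):
    """Repeatedly apply the transition matrix to a uniform distribution."""
    n = len(matrix)
    dist = [1.0 / n] * n
    for _ in range(iterations):
        dist = [sum(dist[i] * matrix[i][j] for i in range(n))
                for j in range(n)]
    return dist

# States: 0 = unfolded, 1 = intermediate, 2 = folded (labels invented)
traj = [0, 0, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 1, 0, 1, 2, 2, 2]
dist = stationary_distribution(build_msm(traj, n_states=3))
dominant = max(range(3), key=lambda s: dist[s])
print(dominant)  # 2: the folded state carries the most probability mass
```

Because the short trajectories are statistically independent, many volunteers’ machines can each contribute counts to the same model, which is what makes the distributed approach efficient.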

The power of FAH’s distributed computing in the hunt for a cure to COVID-19 grows with each citizen scientist who donates the power of their idle computer. Currently, pharmaceutical research into COVID-19 has been hindered by the fact that there are no obvious drug binding sites on the surface of the SARS-CoV-2 virus. This makes developing therapeutic remedies for COVID-19 a long, expensive process of guess-and-check. However, there is promise: in the past, FAH’s simulations captured motions in the proteins of the Ebola virus that create a potentially druggable site not otherwise observable [10]. Using the same methodology as it did for Ebola, FAH has now found similar events in the spike protein of SARS-CoV-2 and hopes to use this and future results to one day produce a life-saving treatment for COVID-19. By downloading Folding@Home and selecting to contribute to “Any Disease”, anyone can help provide FAH-affiliated researchers with the computational power required to tackle this worldwide pandemic. For more information, refer to https://foldingathome.org/start-folding/.

 

References

  1. Smith, M., White, J., Collins, K., McCann, A., & Wu, J. (2020, June 28). Tracking Every Coronavirus Case in the U.S.: Full Map. The New York Times. https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html
  2. May, G. S. (2020, March 20). Update on Our Response to COVID-19. UC Davis Leadership. https://leadership.ucdavis.edu/news/messages/chancellor-messages/update-on-our-response-to-covid19-march-20
  3. Astuti, I., & Ysrafil. (2020). Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2): An overview of viral structure and host response. Diabetes & Metabolic Syndrome: Clinical Research & Reviews. https://doi.org/10.1016/j.dsx.2020.04.020
  4. Everts, S. (2017, July 31). Protein folding: Much more intricate than we thought. Chemical & Engineering News, 95(31). https://cen.acs.org/articles/95/i31/Protein-folding-Much-intricate-thought.html
  5. About – Folding@home. (n.d.). Folding@Home. Retrieved June 28, 2020, from https://foldingathome.org/about/
  6. Bowman, G. R., Voelz, V. A., & Pande, V. S. (2011). Taming the complexity of protein folding. Current Opinion in Structural Biology, 21(1), 4–11. https://doi.org/10.1016/j.sbi.2010.10.006
  7. Husic, B. E., & Pande, V. S. (2018). Markov State Models: From an Art to a Science. Journal of the American Chemical Society, 140(7), 2386–2396. https://doi.org/10.1021/jacs.7b12191
  8. Sengupta, U., Carballo-Pacheco, M., & Strodel, B. (2019). Automated Markov state models for molecular dynamics simulations of aggregation and self-assembly. The Journal of Chemical Physics, 150(11), 115101. https://doi.org/10.1063/1.5083915
  9. Stone, J. E., Phillips, J. C., Freddolino, P. L., Hardy, D. J., Trabuco, L. G., & Schulten, K. (2007). Accelerating molecular modeling applications with graphics processors. Journal of Computational Chemistry, 28(16), 2618–2640. https://doi.org/10.1002/jcc.20829
  10. Cruz, M. A., Frederick, T. E., Singh, S., Vithani, N., Zimmerman, M. I., Porter, J. R., Moeder, K. E., Amarasinghe, G. K., & Bowman, G. R. (2020). Discovery of a cryptic allosteric site in Ebola’s ‘undruggable’ VP35 protein using simulations and experiments. https://doi.org/10.1101/2020.02.09.940510

The Scientific Cost of Progression: CAR-T Cell Therapy

By Picasso Vasquez, Genetics and Genomics ‘20

Author’s Note: One of the main goals for my upper division UWP class was to write about a recent scientific discovery. I decided to write about CAR-T cell therapy because this summer I interned at a pharmaceutical company and worked on a project that involved using machine learning to optimize the CAR-T manufacturing process. I think readers would benefit from this article because it talks about a recent development in cancer therapy.

 

“There’s no precedent for this in cancer medicine.” Dr. Carl June is the director of the Center for Cellular Immunotherapies and of the Parker Institute for Cancer Immunotherapy at the University of Pennsylvania. June and his colleagues were the first to use CAR-T, which has since revolutionized personalized cancer immunotherapy [1]. “They were like modern-day Lazarus cases,” said Dr. June, referencing the resurrection of Lazarus in the Gospel of John and how it parallels the first two patients to receive CAR-T. CAR-T, or chimeric antigen receptor T-cell therapy, is a novel cancer immunotherapy that uses a person’s own immune system to fight off the cancerous cells within their body [1].

Last summer, I had the opportunity to venture across the country from Davis, California, to Springhouse, Pennsylvania, where I worked for 12 weeks as a computational biologist. One of my projects used machine learning models to improve the CAR-T manufacturing process, with the goal of reducing the cost of the therapy. Manufacturing begins when T-cells are collected from the hospitalized patient through a process called leukapheresis. The collected T-cells are frozen and shipped to a manufacturing facility, such as the one I worked at, where they are expanded in large bioreactors. On day three, the T-cells are genetically engineered to be selective toward the patient’s cancer by the addition of the chimeric antigen receptor; this modification turns the T-cells into CAR-T cells [2]. For the next seven days, the engineered T-cells continue to grow and multiply in the bioreactor. On day 10, they are frozen and shipped back to the hospital, where they are infused into the patient. During the 10 days before receiving the CAR-T cells, the patient is given chemotherapy to prepare their body for the immunotherapy [2]. This whole process is very expensive; as Dr. June put it in his TedMed talk, “it can cost up to 150,000 dollars to make the CAR-T cells for each patient.” And the cost does not stop there: when the treatment of other complications is included, the total “can reach one million dollars per patient” [1].
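To give a flavor of how machine learning can inform a manufacturing process like this, here is a minimal sketch of a yield-prediction model. Everything in it is a hypothetical illustration: the feature names (seeding density, IL-2 dose, culture days), the numbers, and the simple linear model are my own inventions, not the actual pipeline or data used in industry.

```python
import numpy as np

# Hypothetical process data: each row is one manufacturing run.
# Columns: [seeding density (1e6 cells/mL), IL-2 dose (IU/mL), culture days]
X = np.array([
    [0.5, 100.0,  8],
    [1.0, 200.0, 10],
    [1.5, 150.0,  9],
    [2.0, 300.0, 10],
    [0.8, 120.0,  8],
])
y = np.array([2.1, 5.0, 4.2, 7.9, 2.8])  # final cell yield (1e9 cells), synthetic

# Fit ordinary least squares with an intercept column: y ≈ X_aug @ w.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Predict the yield of a proposed new run (trailing 1.0 is the intercept term).
new_run = np.array([1.2, 180.0, 9, 1.0])
print("Predicted yield (1e9 cells):", new_run @ w)
```

In practice a model like this would be trained on many historical runs and used to choose process parameters that maximize yield per dollar, which is one route to bringing the per-patient cost down.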

The biggest problem with fighting cancer is that cancer cells arise from the body’s own normal cells gone wrong. Because cancer cells look so similar to normal cells, the immune system’s B and T-cells often cannot tell them apart and therefore cannot fight off the cancer. The concept underlying CAR-T is to isolate a patient’s T-cells and genetically engineer them to express a protein, called a receptor, that directly recognizes and targets the cancer cells [2]. The genetically modified receptor allows the newly created CAR-T cells to bind cancer cells by recognizing the antigen that matches the added receptor. Once the bond between receptor and antigen has formed, the CAR-T cells become cytotoxic and release small molecules that signal the cancer cell to begin apoptosis [3]. Although there have long been drugs that help the body’s T-cells fight cancer, CAR-T breaks the mold with its efficacy and selectivity. Dr. June stated, “27 out of 30 patients, the first 30 we treated, or 90 percent, had a complete remission after CAR-T cells.” He went on to say, “companies often declare success in a cancer trial if 15 percent of the patients had a complete response rate” [1].

As amazing as the results of CAR-T have been, this success did not happen overnight. According to Dr. June, “CAR T-cell therapies came to us after a 30-year journey, along with a road full of setbacks and surprises.” One of these setbacks is the severe side effects that can follow delivery of CAR-T cells. When T-cells find their corresponding antigen, in this case on the cancer cells, they proliferate at very high levels. For patients who have received the therapy, this is a good sign: the increase in T-cells indicates that the therapy is working. But as T-cells rapidly proliferate, they produce cytokines, small signaling proteins that direct the behavior of surrounding cells. During CAR-T therapy, the T-cells rapidly produce a cytokine called interleukin-6 (IL-6), which in high amounts induces inflammation, fever, and even organ failure [3].

According to Dr. June, the first patient to receive CAR-T had “weeks to live and … already paid for his funeral.” After he was infused with CAR-T, the patient developed a high fever and fell into a coma for 28 days [1]. When he awoke, doctors examined him and found that his leukemia had been completely eliminated from his body: CAR-T had worked. Dr. June reported that “the CAR-T cells had attacked the leukemia … and had dissolved between 2.9 and 7.7 pounds of tumor” [1].

Although the first patients had outstanding success, doctors still did not know what caused the fevers and organ failures. It was not until the first child received CAR-T that they discovered the cause of the adverse reaction. Emily Whitehead, at six years old, was the first child enrolled in the CAR-T clinical trial [1]. Emily had been diagnosed with acute lymphoblastic leukemia (ALL), an advanced, incurable form of leukemia. After she received the infusion of CAR-T, she experienced the same symptoms as the prior patient. “By day three, she was comatose and on life support for kidney failure, lung failure, and coma. Her fever was as high as 106 degrees Fahrenheit for three days. And we didn’t know what was causing those fevers” [1]. While running tests on Emily, the doctors found an upregulation of IL-6 in her blood. Dr. June suggested administering Tocilizumab to combat the increased IL-6 levels. After contacting Emily’s parents and the review board, Emily was given Tocilizumab, and “Within hours after treatment with Tocilizumab, Emily began to improve very rapidly. Twenty-three days after her treatment, she was declared cancer-free. And today, she’s 12 years old and still in remission” [1]. Currently, two versions of CAR-T have been approved by the FDA: Yescarta and Kymriah, which treat diffuse large B-cell lymphoma (DLBCL) and acute lymphoblastic leukemia (ALL), respectively [1].

The whole process is stressful and time sensitive. This long manufacturing task is behind the million-dollar price tag on CAR-T and is why only patients in the most dire medical condition receive it [1]. However, as Dr. June states, “the cost of failure is even worse.” Despite the financial cost and difficult manufacturing process, CAR-T has elevated cancer therapy to a new level and set a new standard of care. Still, much work remains: the current CAR-T drugs have been shown to be effective only against blood-based (“liquid”) cancers such as lymphomas, not against solid tumors [4]. Regardless, research into improving CAR-T continues at both the academic and industrial levels.

 

References:

  1. June, Carl. “A ‘living drug’ that could change the way we treat cancer.” TEDMED, Nov. 2018, ted.com/talks/carl_june_a_living_drug_that_could_change_the_way_we_treat_cancer.
  2. Tyagarajan S, Spencer T, Smith J. 2019. Optimizing CAR-T Cell Manufacturing Processes during Pivotal Clinical Trials. Mol Ther. 16: 136-144.
  3. Maude SL, Laetsch TW, Buechner J, et al. 2018. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. N Engl J Med. 378: 439-448.
  4. O’Rourke DM, Nasrallah MP, Desai A, et al. 2017. A single dose of peripherally infused EGFRvIII-directed CAR T cells mediates antigen loss and induces adaptive resistance in patients with recurrent glioblastoma. Sci Transl Med. 9: 399.