- Effects of Permethrin and Permethrin Resistance on Zika Transmission
Co-authors: Huaxuan Chen, Adithya Chakravarthy, Rachel Woo
First published: June 2020

Abstract

The Zika virus is a mosquito-borne virus that has affected 65 countries since 2015. There is currently no cure for the virus, so at-risk populations rely solely on preventative methods such as nets and insecticide sprays, with limited effectiveness. COBWEB, a computer simulation software, was used to determine the effectiveness of introducing the insecticide permethrin in the Brazilian city of Olaria, where the Zika virus is particularly virulent. Owing to its flexibility and wide array of parameters, COBWEB was also able to model the effects of permethrin resistance on Zika transmission. The result was that permethrin significantly reduced fatality rates from the Zika virus, and permethrin with resistance still performed better than the control. This information is helpful, as it can be applied to other areas where the Zika virus is especially virulent and serve as a potential solution to this emerging disease.

Introduction

Vector-borne infectious diseases are human illnesses caused by vectors, organisms that carry and transmit pathogens. Vectors do not themselves cause disease, but they spread infection by passing pathogens from one organism to another. These vector-borne diseases are typically transmitted by animal hosts, and they make up a significant portion of the world's disease burden. Indeed, nearly half of the global population is infected with at least one type of vector-borne disease pathogen.[1] Mosquitoes are the most common disease vector; they spread pathogens by ingesting them from an infected host during a blood meal and then injecting them into a new host.[2] Since its first identification in Africa in 1947, the Zika virus has been spread rapidly across Eastern Brazil and Mexico by mosquitoes.
As of 22 June 2016, 61 countries and territories have reported continuing Zika transmission.[2] In Brazil alone, there have been an estimated 440,000 – 1,300,000 cases of Zika.[3] Propagated by the vector Aedes aegypti, the yellow fever mosquito, the Zika virus can cause a range of symptoms, from rashes and joint pain to total body paralysis.[4] When pregnant women are infected with Zika, their fetuses often display birth defects such as microcephaly, a rare neurological condition resulting in abnormal head sizes. In Paraiba, a province in Northeastern Brazil, the health ministry released statistics revealing that 114 babies per every 10,000 live births were born with suspected microcephaly, more than 1% of all newborns.[5] Since there is currently no cure for the Zika virus, preventative measures such as nets are used to limit exposure to the disease vector.

Permethrin is a synthetic form of pyrethrum, a naturally occurring insecticide that comes from chrysanthemums. It is insecticidal to mosquitoes, ticks, and other insects.[6] It is highly effective: a study by the Institute of Medicine Forum on Microbial Threats showed that when lightweight military uniforms are treated until moist with approximately 4.5 oz of permethrin (0.5% concentration), wearers gain 97.7% protection from mosquito bites.[1]

Using the large-scale biological simulation software COBWEB, the effectiveness of the insecticide permethrin in reducing the spread of Zika was modelled. This simulation focuses on the city of Olaria, Brazil, where the Zika virus is especially virulent. Furthermore, the study examines the growing trend of permethrin resistance in the Aedes aegypti vector, which affects the efficacy of insecticides in preventing the further spread of Zika.
Three simulations were created for comparison: one control, one with the application of permethrin, and one with permethrin plus insecticide resistance. By comparing the three simulations, the research team was able to determine the best way of dealing with this emerging disease.

Methods and Materials

COBWEB, which stands for Complexity and Organized Behaviour Within Environmental Bounds, is an agent-based, Java-coded software used to study the interconnected and interdependent components of complex systems in numerous fields. COBWEB explores how components, such as mosquito and human populations, change and adapt as different variables are manipulated. It is used to create virtual laboratories and facilitate the study of how different populations of agents are influenced by environmental changes. This permits the assessment of growth, decline, or sustainability of the populations within their environment over time. Additionally, abiotic factors such as permethrin can be included to study the effects of their introduction in the virtual laboratory. The effect of human migration on the Zika transmission rate can be simulated in COBWEB by translating population and treatment circumstances into agents and environmental factors.

Based on data from the World Health Organization, the Zika virus has been prevalent in Brazil, though not all areas of Brazil have reported evidence of the virus. The city of Olaria was selected as the environment because it has the highest concentration of the virus; in the 0.79 area covering the average flight range of Aedes mosquitoes, 3,505 and 4,828 female mosquitoes were found in the MosquiTrap and aspirator, respectively, totalling 8,333.[7] By applying Olaria's population to transmission rates, the population variance upon addition of the pesticide was observed.
Setting up COBWEB: Assumptions and Arbitrary Figures

Some arbitrary numbers and assumptions were used for a few parameters in COBWEB. In the environment tab, three agent types were chosen (Figure 1). Agent 1 was assigned to represent the human population of Olaria, Agent 2 the mosquitoes, and Agent 3 the permethrin, a control factor in the experiment. The environment was set to 180 x 180 in dimensions; a larger environment gives the various components more space to interact, producing more reliable data. All other parameters were kept at their defaults.

Figure 1: This is where the desired simulation is configured. It can exemplify any number of systems, such as a section of a forest, ocean, city, or body part, that the user wants to study. The environment is represented on a 2D grid; here it represents the city of Olaria.

This experiment had three simulations. The first (the control) featured just humans and the vector. The second (Simulation 2) featured humans, the vector, and the insecticide permethrin. The third (Simulation 3) featured humans, the insecticide-resistant vector, and permethrin. For ease of explanation, this paper will describe the control first, then Simulation 2, followed by Simulation 3 (Figures 2 – 7).

Several tabs of the COBWEB software were used; the "resources", "agents", "food web", and "diseases" tabs were the main factors manipulated for this study. The "resources" tab was used to sustain the various populations (in this case, mosquitoes, humans, and permethrin) and ensure that they had the 'resources' to function in the experiment. The "agents" tab was used to model the various populations and their respective roles within the environment. To ensure accuracy, the population of Olaria and the mosquito count were determined and inputted as the agent counts.
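COBWEB itself is a Java application configured through its GUI tabs rather than through code. Purely as an illustration of the agent-based setup just described, a minimal Python sketch might look like the following; the grid size and agent counts come from the paper, while the `Agent` class and random-walk movement are hypothetical stand-ins, not COBWEB's actual model.

```python
import random

random.seed(0)

GRID_SIZE = 180                        # the 180 x 180 COBWEB environment
HUMAN, MOSQUITO, PERMETHRIN = 1, 2, 3  # the three agent types

class Agent:
    """A minimal stand-in for a COBWEB agent: a type plus a grid position."""
    def __init__(self, agent_type):
        self.agent_type = agent_type
        self.x = random.randrange(GRID_SIZE)
        self.y = random.randrange(GRID_SIZE)

    def step(self):
        # One tick of random movement, wrapping at the grid edges.
        self.x = (self.x + random.choice((-1, 0, 1))) % GRID_SIZE
        self.y = (self.y + random.choice((-1, 0, 1))) % GRID_SIZE

# Agent counts from the paper: 1,893 humans and 2,500 mosquitoes in Olaria.
agents = ([Agent(HUMAN) for _ in range(1893)]
          + [Agent(MOSQUITO) for _ in range(2500)])
for agent in agents:
    agent.step()
```

In COBWEB the equivalent configuration is entered in the environment and agents tabs; the sketch only shows how those settings translate into a population of located agents advancing one tick at a time.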
The "food web" function was used to control the interactions and interrelationships between the agents. Finally, the "disease" function was used to study the effects of Zika on fatality rates in Olaria and the transmission of microcephaly. The contact transmission rate, the child transmission rate, and the use of permethrin as a "vaccine" with a specific effectiveness were used to study the effects that permethrin has on the simulation and the effects that Zika has on future populations.

The tick number at the top of the screen represents the time period over which a simulation runs. This number, kept constant in all three models, is relative and represents a sample time period. The absolute number matters less than the trend over a constant period, which provides the most conclusive and useful results. For this simulation, each tick was chosen to represent one day.

CONTROL: Vector and Humans, with no Permethrin

Figure 2A: In the "Resources" tab of COBWEB, resource amounts were allotted to the different agents to ensure they have enough 'food' to function and progress through the experiment. "Agent 1" corresponds to the human population and "Agent 2" to the mosquito population.

Figure 2B: In the "Agents" tab of COBWEB, the counts of the different agents, determined from research, were inputted to ensure the reliability of the results. The other factors were determined through experimentation and set as ratios to depict the patterns of the agents.

Figure 2C: In the "Food Web" tab of COBWEB, the interconnectedness between the two agents was depicted. For instance, "Agent 2" has a checkmark for "Agent 1" because the mosquito population affects the human population.

Figure 2D: In the "Disease" tab of COBWEB, the infected fraction and the child transmission rate (as of 2015) were inputted.
Since the Zika virus leads to microcephaly, a birth defect, the percentage of children of infected parents who acquire the condition was also inputted.

SIMULATION 2: Vector with Permethrin

Figure 3: To reiterate, the food web function was employed to depict the interactions between the agents and the three varieties of food. Agent 3 (permethrin) "consumes" mosquitoes to signify that it kills them. Food 1 represents food that both mosquitoes and humans need to survive; this mostly signifies water, since it is the resource that both agents need most. Food 2 represents food meant only for mosquitoes. Food 3 simply keeps permethrin levels relatively consistent throughout the simulation's progress; it can be seen as a source of permethrin.

Figure 4: The next step was setting the agent parameters. Agent 1 represents the human population of 1,893 in Olaria, Brazil[8]; Agent 2 is the mosquito population of 2,500, the average female population size[9]; and Agent 3 is the insecticide. The breed energy is higher for Agent 1 to signify that 'more energy' is required to reproduce, reflecting that human birth rates are lower than those of mosquitoes.

Figure 5: The initially infected fraction was approximately 7%[9], the contact transmission rate was left at the default, and the child transmission rate was set to 15%, since not all babies exposed to the Zika virus would be infected; this is the average of the predicted 10–20% chance of infection.[5] For Agent 2, the factors were all kept constant. The same was true for Agent 3, except for the effectiveness rate of 97.7%, since the pesticide gives 97.7% protection from mosquito bites. As seen in Figure 3, Agent 3 'eats' Agent 2, so the contact transmission rate is translated in that respect.
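As a rough sketch of how an effectiveness parameter like the 97.7% above can drive a tick loop, consider the toy model below. It is not COBWEB's internal logic: the daily exposure fraction is invented for illustration, and treating the 93% resistance used for Simulation 3 as a multiplicative reduction of the kill probability is an assumption of this sketch.

```python
import random

random.seed(42)

TICKS = 800            # each tick represents one day, as in the paper
EFFECTIVENESS = 0.977  # permethrin gives 97.7% protection (Simulation 2)
RESISTANCE = 0.93      # Simulation 3: resistance in 93% of interactions
DAILY_EXPOSURE = 0.02  # fraction of mosquitoes contacting permethrin per day (invented)

def run(effectiveness, mosquitoes=2500, ticks=TICKS):
    """Each tick, a mosquito contacting permethrin dies with probability
    equal to the insecticide's effectiveness; return the population history."""
    history = []
    for _ in range(ticks):
        kill_prob = DAILY_EXPOSURE * effectiveness
        mosquitoes = sum(random.random() > kill_prob for _ in range(mosquitoes))
        history.append(mosquitoes)
    return history

sim2 = run(EFFECTIVENESS)                     # permethrin only
sim3 = run(EFFECTIVENESS * (1 - RESISTANCE))  # resistance blocks most kills
# The susceptible population collapses well before tick 800, while the
# resistant population declines far more slowly.
```

Even this crude loop reproduces the qualitative pattern reported in the Results: the mosquito population under plain permethrin reaches zero within the 800-tick window, while resistance leaves a larger surviving population at every point in time.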
SIMULATION 3: Insecticide-Resistant Vector with Permethrin

Figure 6: All factors are identical to those of the second simulation, except that in the third the vector's resistance to permethrin is modelled under the "vaccine effectiveness" tab. According to various experiments studying insecticide resistance in the Aedes aegypti vector, the vector can show resistance in 90% to 95% of its interactions with permethrin.[10] The vaccine effectiveness was set to 93% to represent the mean and most common resistance statistic.

Results

To compare the three simulations effectively, the tick count (i.e., the time step in the model) was kept at 800 throughout. This relatively large number ensures an observable trend; significant changes in an ecosystem over short periods could otherwise skew the findings.

CONTROL: In the control simulation, there was rapid growth in the mosquito population and a rapid decrease in the human population. This can be attributed to the fact that, without interference, the number of mosquitoes grows exponentially, and with it the number of interactions between mosquitoes and humans.

Figure 7A: The control simulation examined the effects of the Zika virus in Olaria without permethrin. In this simulation, the tick count was 800, representing 800 days. The graph depicts the human population over time, which steadily decreases.

Figure 7B: The graph depicts the mosquito population over time, which increases and then decreases after the population reaches 7,000.

SIMULATION 2: The second simulation also had a tick count of 800 for consistency. In this simulation, the human population experienced an initial decrease, followed by a steady increase to about 4 times the initial population (Figure 8).
The mosquito population rapidly declined and levelled out at zero after around 411 days (Figure 9). Once the mosquito population hit zero, there was an observable spike in the human population, as expected. The supply of permethrin was kept steady and constant over time to maximize its influence on the population.

Figure 8: This graph shows the increase in the human population over a span of 800 days.

Figure 9: This graph shows the decrease in the Aedes aegypti population over a span of 800 days.

SIMULATION 3: The third simulation, which also had a tick count of 800 days, displayed the development of resistance to the insecticide in the Aedes aegypti mosquito. Because mosquito populations eventually develop resistance to certain treatments, it is important to study these effects over time and determine to what degree resistance impinges on the effectiveness of permethrin.

Figure 10: The population of the Aedes aegypti vector spikes before declining to zero, unlike in the second simulation (Figure 9), where the mosquito population declines without spiking; this is attributable to the implementation of insecticide resistance.

Figure 11: The graph depicts the trend of the human population in Olaria when the city is subject to permethrin-resistant mosquitoes. The human population rises steadily, but at a slower rate than in the second simulation (Figure 8).

Discussion

This study shows that Zika is best controlled with permethrin. Although including permethrin resistance caused the results to deviate from those with permethrin alone, they were still better than the control: the human population increased and the mosquito population decreased to a greater extent when permethrin was applied. Additionally, comparing the graphs for Simulation 3 and the control shows that Simulation 3 yields a smaller fatality rate for humans and a greater fatality rate for mosquitoes.
However, with insecticide resistance, the results and trends were less pronounced than without it. For instance, the Aedes aegypti population spiked before declining to zero. This was likely due to insecticide resistance, which allowed the initial population of mosquitoes to propagate further before declining, as expected. The human population in the third simulation also rose steadily, but at a slower rate than in the simulation without insecticide resistance; resistance could have made it easier for mosquitoes to infect humans and thus slowed population growth.

Although care was taken to minimize error, a few inevitable sources of error could have affected the data. Firstly, the model does not account for environmental factors such as differences in temperature, humidity, and elevation, all of which could influence the reproductive and survival rates of the Aedes aegypti vector. In particular, rising global temperatures from climate change would significantly affect vector populations, changing the survivability and reproduction of mosquitoes. Extreme weather can also drastically influence the populations of humans and mosquitoes, making the data less reliable. These factors could be included in the next iteration of this work.

Besides environmental factors, numerous socio-economic factors were also left out of the study. The methods assumed that women and men were equally susceptible to Zika because, biologically speaking, their chances of contracting the virus are equal.
However, in many rural areas such as Olaria, a significant number of women work in the fields under adverse conditions, increasing their exposure to the vector; about 70% of rural people in Brazil engage in agricultural employment, and female-headed households, which are becoming increasingly common, make up 27% of the poor rural population. Had these environmental and socio-economic elements been factored in, the simulation results could have been different.

Another potential source of error was the averaging of the effectiveness of permethrin, which produced a single sample rate rather than a range. However, the average was the most prevalent value among the tests, so it was used to represent the most likely occurrence in the range of possibilities. Furthermore, in the real world, a small population of mosquitoes may survive because of insecticide resistance and pass on the disease; in this way, the number of mosquitoes able to transmit the Zika virus could grow exponentially. The COBWEB software cannot factor that circumstance into the simulation, so the model has all mosquitoes dying out. This would have affected the mosquito populations over time, but the model still gives a guideline and an approximate trend for the reduction of mosquitoes. Despite these discrepancies, COBWEB produced results similar to the data provided by the WHO.

In the future, factors such as the impact of climate change and the migration of people carrying the disease could be studied, and comparisons could be made between permethrin and other solutions for mitigating the spread of the Zika virus.

Conclusion

There is potential for more studies to be performed using the current Zika models in COBWEB. The insecticide permethrin has shown a promising ability to decrease the population affected by Zika in Olaria.
This can be applied to all of Brazil and extended to other countries where Zika is virulent. This research shows permethrin to be an effective barrier to the initial interaction between humans and mosquitoes, suggesting an alternative to the existing options of nets and reproductive precautions. The next step is to explore how climate change can affect these agents, and how encouraging governments to mitigate the effects of the changing climate can impact the population's health and well-being. Another area of future study is the effectiveness of a Zika vaccine, as one study found the vaccine effective in 17 out of 18 tests on monkeys. Overall, permethrin was shown to be effective in reducing the interactions between humans and mosquitoes, and concurrently reducing cases of Zika. This research can be applied to other countries, such as Ecuador, where the Zika virus is also very virulent.

Acknowledgements

A great thank you to Dr. Brad Bass, a status professor at the University of Toronto and Nobel Peace Prize co-recipient, for developing the COBWEB software and for mentoring the team along the way.

References

1. Institute of Medicine Forum on Microbial Threats. 2008. "Summary and Assessment". https://www.ncbi.nlm.nih.gov/books/NBK52939/
2. "Zika Situation Report". 2016. World Health Organization. http://www.who.int/emergencies/zika-virus/situation-report/23-june-2016/en/
3. Bogoch, Isaac, Oliver Brady, Moritz Kraemer, Matthew German, Marisa Creatore, and Manisha Kulkarni. 2016. "Anticipating the International Spread of Zika Virus from Brazil". Europe PMC. http://europepmc.org/articles/pmc4873159
4. "What We Know About Zika". 2018. CDC. https://www.cdc.gov/zika/about/
5. "More Brazilian Babies Born with Defects". 2018. BBC News. http://www.bbc.co.uk/news/world-latin-america-35368401
6. "Insect Precautions – Permethrin, DEET, and Picaridin". 2018. IU Health Center, Indiana University Bloomington.
http://healthcenter.indiana.edu/answers/insect-precautions.shtml
7. Massad, Eduardo, Marcos Amaku, Francisco Coutinho, Claudio Struchiner, Luis Lopez, Annelies Wilder-Smith, and Marcelo Burattini. 2018. "Estimating the Size of Aedes aegypti Populations from Dengue Incidence Data: Implications for the Risk of Yellow Fever Outbreaks". Accessed November 18. https://arxiv.org/pdf/1709.01852.pdf
8. "Olaria (Municipality, Brazil) – Population Statistics, Charts, Map and Location". 2018. Citypopulation.de. https://www.citypopulation.de/php/brazil-regiaosudeste-admin.php?adm2id=3145406
9. Maciel-de-Freitas, Rafael, Alvaro Eiras, and Ricardo Lourenco-de-Oliveira. 2008. "Calculating the Survival Rate and Estimated Population Density of Gravid Aedes aegypti (Diptera, Culicidae) in Rio de Janeiro, Brazil". http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0102-311X2008001200003
10. Rodriguez, Maria, Juan Bisset, and Ditter Fernandez. 2007. BioOne. http://www.bioone.org/doi/abs/10.2987/5588.1

About the Authors

Huaxuan Chen is a student who is extremely passionate about global health, law, chemistry, and development. She is also an SDGs advocate who was a Canadian Youth Representative at the Commission on the Status of Women Youth Forum and the PGA High-Level Event on Education; she focuses mainly on climate action, gender equality, and education. She enjoys using her knowledge and skills to help others. She is now studying statistics at Duke University.

Adithya Chakravarthy is currently a third-year student in the Engineering Science Program at the University of Toronto. He is very passionate about computer modelling of complex systems. Apart from science, he is deeply involved in debating, having represented Canada on the Canadian National Debate Team at the 2018 World Schools Debating Championships in Zagreb, Croatia.
He is also a vocalist in the Indian Classical music tradition, having performed in many concerts across North America and India.

Rachel Woo is starting her Master's in Public Health at Waterloo in Fall 2020. Her research interests include data visualization and games for health.
- Navigating Bioethical Waters: The ethical landscape behind stem cell research
Author: Kai Sun Yiu

Abstract

The summer of 1996 sparked the beginnings of limitless scientific potential. Dolly the sheep was born from her surrogate mother after being cloned by Sir Ian Wilmut and his team from a cell of a six-year-old Finn Dorset sheep [1]. Dolly was formed by placing genetic material extracted from a Finn Dorset sheep into an enucleated egg cell [2]. An embryo formed following a series of mitotic divisions, and 148 days (about 5 months) after being implanted into the surrogate mother's uterus, Dolly was born [3]. Dolly was not, however, the first mammal to be cloned; that title is held by two other sheep, Megan and Morag, who had been cloned a year earlier from embryonic and fetal cells [2]. This did not undermine her significance: she was the first mammal cloned from an adult cell rather than an embryonic cell. Dolly's existence disproved the past assumption that a specialized cell could only do its one job, since she was born from a specialized mammary cell that nonetheless held the genetic information to create an entirely new sheep [4]. This sparked new potential for medicine and biology through the development and research of personalized stem cells, which this article explores through the differences between embryonic stem cells and adult stem cells.

Introduction

Human cloning can be defined as the creation of a 'genetically identical copy of a previously existing human,' or the reproduction of cloned cells/tissue from that individual [5]. Through human cloning, we can obtain stem cells from the cloned blastocyst and treat these cells to differentiate into any cells needed for medical purposes. However, this understandably has ethical complications, which have made it difficult for scientists to carry out much stem cell research.
This article focuses on the more complex ethical standpoints around stem cell research through some forms of therapeutic cloning, using SCNT (somatic cell nuclear transfer) and iPSCs (induced pluripotent stem cells). While SCNT uses embryonic stem cells, iPSCs utilize the adult stem cells we possess in our bodies to repair damaged cells and tissues. Both types of cells carry major ethical complications in their use, allowing this article to tackle the conflict between stem cells' infinite possibilities and their downsides and ethical considerations.

An insight into modern stem cell research - SCNT and iPSCs

Somatic cell nuclear transfer (SCNT) refers to the process used by both reproductive and therapeutic cloning to produce a cloned embryo. SCNT was first used by Sir Ian Wilmut and his team when cloning Dolly the sheep:

1. The nucleus, which contains the organism's genetic material (DNA), is removed from a somatic cell
2. The nucleus from the somatic cell is then inserted into the cytoplasm of an enucleated egg cell
3. The egg containing the nucleus is stimulated with electric shocks to encourage mitotic division
4. After many mitotic divisions, the cell forms a blastocyst, which divides further until it eventually forms an embryo [6]

SCNT is utilized by scientists and researchers worldwide to obtain stem cells from the cloned embryo and use them in regenerative medical practices [6]. A common use of stem cells in regenerative medicine is the treatment of Parkinson's disease, where stem cells can restore the production of dopamine in the brain. The process involves obtaining undifferentiated stem cells from the embryo and treating them to differentiate into dopamine-producing nerve cells [7]. With the use of SCNT, autologous cells can be formed.
Autologous cells are formed from the stem cells of the same individual, meaning the therapeutic material is cloned from the patient, so no immunosuppressive treatment is needed when the differentiated cells are injected into the body [7]. Because autologous cells are not foreign cells, there is no risk of the immune system rejecting and damaging the newly introduced stem cells. Given stem cells' ability to 'treat many human afflictions, including ageing, cancer, diabetes, blindness and neurodegeneration,' why is there such a lack of breakthrough research on stem cell therapy [8]?

Induced pluripotent stem cells (iPSCs) are cells that have been derived and reprogrammed from adult somatic cells (normally taken from a patient's bone marrow) [9]. These cells are altered through the introduction of genes and other factors to make them pluripotent; this arguably makes them like embryonic stem cells, so some argue they carry the same ethical problems [9]. Beyond these ethical debates, iPSCs also take 3–4 weeks of careful lab work to form [10]. The process is extremely slow and inefficient, with a success rate lower than 0.1% [10]. Nonetheless, there are limitless applications for iPSCs, such as regenerative medicine, disease modeling, and gene therapy. As with SCNT, the main advantage of iPSCs is the elimination of any possibility of immune system rejection: iPS cells are generated directly from somatic cells of the person's own body, so there cannot be an immune response to them, as the cells are genetically tailored precisely to the patient they are taken from [12]. The main problem with iPSCs is the risk of mutation during the reprogramming of the somatic cells, which can lead to the formation of a cancerous tumor [11]. However, we still need to explore the ethical standpoints in the formation of pluripotent stem cells for research and regenerative science.
The ethics - SCNT

Through the process of SCNT, we can obtain embryonic stem cells from an early embryo. The extraction of the stem cells destroys the blastocyst, a 3–5 day old embryo that can be observed as a cluster of around 180 cells growing within a petri dish, before it develops into a fetus [13]. The blastocyst is used because it is at such an early stage of development that its cells have not yet differentiated, so it is arguably 'not alive' [13]. The ethical argument against the extraction of stem cells is that the destruction of an embryo is, arguably, the killing of a human being. Consider the position of Senator Sam Brownback, who saw 'a human embryo...as a human being just like you and me' [13]. Standpoints on the destruction of embryos vary widely; George Bush used his veto when the US Congress passed a controversial bill permitting more funding for research using embryonic stem cells [14]. These ethical considerations surrounding the formation of stem cells are difficult to set aside, yet we must also remember the immense medical and research potential stem cells carry.

Embryonic Stem Cell Research around the Globe - SCNT

Map Explanation [15]
- Dark Brown = 'permissive': allows various embryonic stem cell research techniques such as SCNT, with human reproductive cloning not allowed
- Light Brown = 'flexible': many restrictions, with embryos only used under strict conditions; SCNT is completely banned
- Yellow = no policy or restrictive policy: outright prohibition of embryonic stem cell research
- Black dots = leading genome sequencing research centres in the world

The map above illustrates the varying flexibility around the globe when it comes to the use of embryos in stem cell research. Even in the very few countries with leading facilities and institutions, there are massive restrictions on the usage of embryos.
Even in Britain, which voted to ease restrictions on the use of embryonic stem cells in 2001, there are still only 7 laboratories across the country [16], [17]. The various ethical considerations around the destruction of an embryo make stem cell research difficult to legalize and even to fund. Yet the use of stem cells is arguably the most promising avenue of regenerative medicine research of the last century. Imagine the ability to cure diseases by replacing cells damaged by infection, or to grow organs from stem cells to transplant into the thousands of patients waiting for an organ donor. What if there was a way to form pluripotent stem cells without the destruction of an embryo?

The ethics - iPSCs

Pluripotent stem cells have the ability to form all three of the basic germ layers in our body (ectoderm, endoderm, mesoderm), which allows them to potentially produce any cell or tissue needed [18]. There are four types of pluripotent human stem cells [19]:
- Embryonic stem cells
- Nuclear transplant stem cells
- Parthenote stem cells
- Induced stem cells

All pluripotent human stem cells, apart from induced stem cells, require human eggs to create. This means that the use of pluripotent human stem cells is limited by ethical considerations; induced pluripotent stem cells are different in that they do not require the destruction of, or harm to, an embryo. iPSCs were discovered over ten years ago by Shinya Yamanaka. The Nobel Prize winner revolutionized biological research by developing a technique to convert mature adult cells into stem cells using the four key genes OCT3/4, SOX2, KLF4, and MYC, now known as the 'Yamanaka factors' [20]. iPSC research has been explored by thousands of researchers around the world, the production of the cells being less controversial because they are derived straight from adult cells rather than embryonic cells.
There have been numerous applications of iPSCs in therapeutic medicine. In 2014, RIKEN (the largest scientific research institution in Japan) treated the first patient with iPSC-derived retinal sheets, which helped with visual function [21]. Two years later, Cynata Therapeutics, a biotech company, produced an iPSC-derived product for the treatment of graft-versus-host disease (GvHD) [21]. GvHD is a life-threatening disease that can occur as a complication of stem cell and bone marrow transplants [21]. When the grafted cells are transplanted into the patient, they begin to produce antibodies that interact with the host's antigens. This triggers an immune response that may set off an inflammatory cascade, causing irreversible organ dysfunction and even death [22]. The vast medical possibilities iPSCs unlock, paired with the ethical problems they avoid, make iPSCs an attractive way to bring stem cell therapy to the masses. However, it is not so simple: the main issue with iPSCs is that retroviruses are needed to create them [23]. The retroviruses used in forming iPSCs can insert their DNA anywhere in the human genome and trigger cancer-causing gene expression when the cells are transplanted into a patient [23]. Furthermore, the success rate of reprogramming somatic cells into iPSCs is only around 0.1%. Not only this, but iPSCs do not always differentiate reliably, making them less dependable and successful than embryonic stem cells [23]. Nonetheless, research into iPSCs has developed rapidly over the past few years, with scientists and researchers slowly making stem cell therapy using iPSCs available to the public.
The Pros and Cons of Embryonic Stem Cells and Induced Pluripotent Stem Cells

Embryonic Stem Cells (ESCs)
Pros:
- Can be maintained and grown for a year or more in culture
- Established protocols for maintenance in culture
- ESCs are pluripotent cells that can generate most cell types
- Studying ESCs teaches us more about the process of development
Cons:
- The process to generate ESC lines is inefficient
- It is unclear whether they would be rejected if used in transplants
- ESC-based therapies are largely new, and much more research and testing is needed
- If used directly from the undifferentiated culture preparation for tissue transplants, they can cause tumors (teratomas) or cancer
Ethical concerns:
- Acquiring the inner cell mass destroys the embryo
- Risk to the female donors who consent to provide eggs

Induced Pluripotent Stem Cells (iPSCs)
Pros:
- Abundant somatic cells from the donor can be used
- Histocompatibility issues with donor/recipient transplants can be avoided
- Very useful for drug development and developmental studies
- Information learned from the “reprogramming” process may be transferable to in vivo therapies that reprogram damaged or diseased cells/tissues
Cons:
- Methods for ensuring reproducibility and maintenance of differentiated tissues are not yet certain
- Viruses are currently used to introduce the embryonic genes, and this has been shown to cause cancers in mouse studies
Ethical concerns:
- iPS cells have the potential to become embryos if exposed to the right conditions

A comparative table between embryonic stem cells and induced pluripotent stem cells [26]

Stem Cell Therapy Today

Although no one has been cured of Parkinson's disease (PD) yet, research from institutions around the world has shown significant progress in recent years with experimental treatments. On 13 February 2023, healthy, dopamine-producing nerve cells derived from embryonic stem cells (most likely obtained through SCNT) were transplanted into a patient with Parkinson's at Skåne University Hospital, Sweden [24].
This marks an important milestone for all stem cell research: the transplantation of the nerve cells was performed flawlessly, as shown by magnetic resonance imaging (MRI) [24]. The STEM-PD trial at Lund University, the first in-human trial to test the safety of stem cells for Parkinson's, is continuing to replace lost dopamine cells with healthy ones manufactured from embryonic stem cells [24]. Parkinson's disease slowly degrades the nervous system through the loss of nerve cells in the substantia nigra of the brain. These nerve cells are crucial for the production of dopamine, so implanting replacement cells helps regulate brain activity and function by restoring regular dopamine levels. STEM-PD aims to move from this first human trial all the way to treatment worldwide [24]. This latest success with embryonic stem cells further motivates researchers around the world, with stem cells poised to unlock cures for multiple diseases and to ease the worldwide shortage of organs.

In early 2023, researchers were also able to differentiate mature neurons from induced pluripotent stem cells (iPSCs) [25]. Using iPSCs made this an arduous task: the team first had to differentiate the iPSCs into motor neurons, then place them in coatings of synthetic nanofibers containing rapidly moving “dancing molecules” [25]. Mature neurons support the body's nervous system by sending rapid electrical signals through the tiny structures known as nerves. In the near future, researchers believe these mature neurons could be transplanted into people suffering from spinal cord injuries as well as neurodegenerative diseases such as ALS, Parkinson's, Alzheimer's, and multiple sclerosis [25]. This advance in the use of iPSCs lets scientists pursue ethically sound ways of using stem cells to treat disease.
Conclusion

The use of stem cells to repair damaged cells and tissues, to understand diseases, and to test new drugs gives them incredible value in research and regenerative medicine. As regenerative medicine research improves, the success rate of stem cell therapies will gradually grow, which will hopefully loosen the tight legal grasp over the use of ESCs (embryonic stem cells) and iPSCs that stems from their ethical problems (as seen in the table above): the destruction of an embryo in the case of ESCs, and the theoretical potential of iPSCs to develop into an embryo. Researchers and scientists should strive to refine existing stem cell formation techniques and, through the difficult legal terrain, battle their way to the final aim: stem cells able to differentiate into any cell needed to cure any disease, to form any organ needed for transplant, and to deepen our understanding of diseases that are hardest to treat.

References

[1] Weintraub, Karen. “20 Years after Dolly the Sheep Led the Way-Where Is Cloning Now?” Scientific American, July 1, 2016. https://www.scientificamerican.com/article/20-years-after-dolly-the-sheep-led-the-way-where-is-cloning-now/
[2] “The Life of Dolly.” Dolly the Sheep. Accessed November 26, 2023. https://www.ed.ac.uk/roslin/about/dolly/facts/life-of-dolly
[3] “Dolly and Polly.” Encyclopedia Britannica. Accessed November 26, 2023. https://www.britannica.com/biography/Ian-Wilmut/Dolly-and-Polly
[4] “Dolly the Sheep.” National Museums Scotland. Accessed November 26, 2023. https://www.nms.ac.uk/explore-our-collections/stories/natural-sciences/dolly-the-sheep/
[5] “Human Cloning.” ScienceDaily. Accessed November 26, 2023. https://www.sciencedaily.com/terms/human_cloning.htm
[6] “Somatic Cell Nuclear Transfer.” Bionity. Accessed November 26, 2023.
https://www.bionity.com/en/encyclopedia/Somatic_cell_nuclear_transfer.html
[7] “Therapeutic Cloning.” Therapeutic Cloning - an overview | ScienceDirect Topics. Accessed November 26, 2023. https://www.sciencedirect.com/topics/engineering/therapeutic-cloning
[8] Watt, Fiona M, and Ryan R Driskell. “The Therapeutic Potential of Stem Cells.” Philosophical transactions of the Royal Society of London. Series B, Biological sciences, January 12, 2010. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2842697/#:~:text=Almost%20every%20day%20there%20are,%2C%20diabetes%2C%20blindness%20and%20neurodegeneration
[9] Ye, Lei, Cory Swingen, and Jianyi Zhang. “Induced Pluripotent Stem Cells and Their Potential for Basic and Clinical Sciences.” Current cardiology reviews, February 1, 2013. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3584308/
[10] Ghaedi, Mahboobe, and Laura E Niklason. “Human Pluripotent Stem Cells (iPSC) Generation, Culture, and Differentiation to Lung Progenitor Cells.” Methods in molecular biology (Clifton, N.J.), 2019. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5976544/#:~:text=To%20generate%20the%20iPSCs%2C%20each,low%20as%00.01–0.1%20%25
[11] “Why iPSC Research Is so Important (and so Tough).” Tecan. Accessed November 27, 2023. https://www.tecan.com/blog/why-ipsc-research-is-soimportant#:~:text=For%20example%2C%20iPSC%20are%20used,specific%20cell%20types%20and%20tissues
[12] Lowden, Olivia. Advantages and Disadvantages of Induced Pluripotent Stem Cells, November 10, 2023. https://blog.bccresearch.com/advantages-and-disadvantages-of-induced-pluripotent-stem-cells
[13] “Examining the Ethics of Embryonic Stem Cell Research.” Harvard Stem Cell Institute (HSCI). Accessed November 28, 2023. https://hsci.harvard.edu/examining-ethics-embryonic-stem-cell-research
[14] Lenzer, Jeanne. “Bush Says He Will Veto Stem Cell Funding, despite Vote in Favour in Congress.” BMJ (Clinical research ed.), June 16, 2007.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1892464/
[15] Stem cell policy: World stem cell map. (Image: William Hoffman, MBBNet) Accessed November 29, 2023. https://www.mbbnet.umn.edu/scmap.html
[16] Lachmann, P. “Stem Cell Research--Why Is It Regarded as a Threat? An Investigation of the Economic and Ethical Arguments Made against Research with Human Embryonic Stem Cells.” EMBO reports, March 2001. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1083849/
[17] Stem Cell Processing, Cellular and Molecular Therapies - NHS Blood and Transplant. Accessed November 29, 2023. https://www.nhsbt.nhs.uk/cellular-and-molecular-therapies/products-and-services/stem-cell-processing/
[18] Stem Cell Program, Pluripotent Stem Cell Research. Accessed November 29, 2023. https://www.childrenshospital.org/research/programs/stem-cell-program-research/stem-cell-research/pluripotent-stem-cell-research#:~:text=What%20makes%20pluripotent%20stem%20cells,body%20needs%20to%20repair%20itself
[19] Bedford Research Foundation. ‘What are Induced Pluripotent Stem Cells? (iPS Cells)’ Apr 23, 2011. https://www.youtube.com/watch?v=i-QSurQWZo0
[20] Dana G. ‘Reflecting on the Discovery of the Decade: Induced Pluripotent Stem Cells.’ Accessed November 29, 2023. https://gladstone.org/news/reflecting-discovery-decade-induced-pluripotent-stem-cells
[21] Cade Hildreth. Induced Pluripotent Stem Cell (iPS Cell) Applications in 2023. Accessed December 3, 2023. https://bioinformant.com/ips-cell-applications/
[22] Clopton, David. n.d. “Graft versus Host Disease | Radiology Reference Article | Radiopaedia.org.” Radiopaedia. https://radiopaedia.org/articles/graft-versus-host-disease?lang=gb
[23] Dr. Surat, and Dr. Tomislav Mestrovic. Induced Pluripotent Stem (iPS) Cells: Discovery, Advantages and CRISPR Cas9 Gene Editing. Accessed December 3, 2023.
https://www.news-medical.net/life-sciences/Induced-Pluripotent-Stem-(iPS)-Cells-Discovery-Advantages-and-CRISPR-Cas9-Gene-Editing.aspx#:~:text=Muhammad%20Khan%20%7C%20TEDxBrentwoodCollegeSchool-,Disadvantages,trigger%20cancer-causing%20gene%20expression
[24] First patient receives milestone stem cell-based transplant for Parkinson’s Disease. Feb 28, 2023. https://www.lunduniversity.lu.se/article/first-patient-receives-milestone-stem-cell-based-transplant-parkinsons-disease#:~:text=First%20patient%20receives%20milestone%20stem%20cell-based%20transplant%20for%20Parkinson%27s%20Disease,-Published%2028%20February&text=On%2013th%20of%20February%2C%20a,at%20Skåne%20University%20Hospital%2C%20Sweden
[25] Amanda Morris. Mature ‘lab grown’ neurons hold promise for neurodegenerative disease. Jan 12, 2023. https://news.northwestern.edu/stories/2023/01/mature-lab-grown-neurons-hold-promise-for-neurodegenerative-disease/
[26] University of Nebraska Medical Center. STEM CELLS. Accessed December 5, 2023. https://www.unmc.edu/stemcells/stemcells/unmc.html
[27] Michael S. Pepper, C Gouveia. Legislation governing pluripotent stem cells in South Africa. (Image: Melodie Labuschaigne), Sept 2015. https://www.researchgate.net/figure/Somatic-cell-nuclear-transfer-SCNT-SCNT-involves-the-removal-of-the-chromosomes_fig3_285619276
- Overview of Brain Imaging Techniques
Author: Steffi Kim

Brain imaging techniques allow neurologists and researchers alike to measure brain activity, diagnose medical and psychiatric conditions, and gain insight into the brain’s interconnected webs and complex structures. While psychologists once had to rely entirely on observable behavior and could only guess at the workings of the brain, new technologies can reveal the brain’s structure and function in astonishing detail. Each brain scanning technology has specific purposes and limitations. As such, researchers may choose a specific technique or combination of techniques depending on the circumstances and what is being measured.

EEG (Electroencephalogram)

The EEG was first developed in 1924 by Hans Berger, a German psychiatrist, making it one of the oldest brain imaging technologies. EEG measures the frequency and location of brain waves through a series of small electrodes placed across the scalp. Every time neurons in the brain fire, an electrical field is produced. By measuring electrical activity, the electrodes can effectively assess neuronal firing. The electrodes are commonly attached to a cap and are wired to a monitor that graphs the frequency of brain waves. Different brain waves (gamma, beta, alpha, theta, and delta) signal varying levels of brain alertness and functioning. Abnormal brain wave patterns could be the result of a neurological condition, and EEG is commonly used to test for epilepsy and sleep disorders.

fMRI (Functional Magnetic Resonance Imaging)

fMRI involves tracking the movement of blood and oxygen through the brain to analyze functioning and structure. Highly active brain regions require more oxygen, and greater blood flow in an area is associated with increased brain activity. To perform an fMRI scan, patients are placed into the tunnel of an MRI scanner, which utilizes strong magnetic fields and radio waves.
The magnetic field of the scanner alters the positioning of hydrogen protons in the water of the blood, causing the hydrogen atoms to rotate and release energy. The scanner measures the magnetic signals produced by the hydrogen to develop detailed images of the brain. fMRI is widely used in studies, where participants may perform tasks while in the MRI scanner so that researchers can observe which brain regions are involved.

PET (Positron Emission Tomography)

PET scans track blood flow in the brain via a radioactive tracer substance. The radioactive substance, which is either injected, swallowed, or inhaled, binds to glucose in the blood. The blood then travels up to the brain and through its various regions, emitting gamma rays produced when positrons from the radioactive tracer interact with electrons. Neurons use glucose as their main energy source, so areas with more glucose uptake indicate higher brain activity. The radioactive substance appears on images as bright, multicolored patches, where red symbolizes the highest level of glucose metabolism and activity, while purple and black represent low levels of function. PET scans, which reveal how well the brain is working on a cellular level, are often used to assess Alzheimer's disease and seizures, as well as other medical conditions.

CT (Computerized Tomography)

CT scans splice together multiple X-rays to create cross-sectional images or 3D models of the brain. CT scans reveal more information about the brain tissues and skull than typical X-rays and are ideal for assessing fractures, brain injuries, and damage after strokes.

References:
Bosquez, Taryn. “Neuroimaging: Three Important Brain Imaging Techniques.” ScIU, February 5, 2022. https://blogs.iu.edu/sciu/2022/02/05/three-brain-imaging-techniques/.
“Brain Imaging: What Are the Different Types?” BrainLine, April 22, 2011. https://www.brainline.org/slideshow/brain-imaging-what-are-different-types.
Genetic Science Learning Center. "Brain Imaging Technologies." Learn.Genetics.
June 30, 2015. https://learn.genetics.utah.edu/content/neuroscience/brainimaging/.
Lovering, Nancy. “Types of Brain Imaging Techniques.” Psych Central, October 22, 2021. https://psychcentral.com/lib/types-of-brain-imaging-techniques.
“Scanning the Brain.” American Psychological Association, August 1, 2014. https://www.apa.org/topics/neuropsychology/brain-form-function.
- How Music Affects the Brain
Author: Steffi Kim

It’s a long-established fact that listening to music can affect your feelings and mind. However, only relatively recently has neuroscience allowed these shifts in brain activity to actually be recorded. Music is a universally appreciated art form that, across a vast array of genres, can encompass almost every human experience or feeling. Music is what we turn to for comfort when we feel down, for concentration when studying, or for adrenaline before a sports game. Music has been found to decrease symptoms of depression, anxiety levels, and blood pressure. Moreover, listening to music can boost the immune system as well as improve alertness, memory, and cognitive functioning.

How the brain perceives music

Essentially, music is a combination of notes, rhythms, melodies, and vocals that the brain absorbs and processes as a song. Sound waves in the air create vibrations in the eardrum that then become electrical signals. The auditory nerve, spanning from the inner ear to the brain stem, encodes the details of the sound and sends them to the temporal lobe to process. The right hemisphere of the temporal lobe interprets the instrumentals and music, while the left hemisphere decodes the language and lyrics. Interestingly, the way the brain processes music is not universal: brain scans have revealed that professional musicians “hear” music differently than others do. Rather than engage the temporal lobe, listening to music causes professional musicians to employ the visual, or occipital, lobe, indicating that they may be visualizing sheet music or notes while listening.

Areas of the brain

Music provides a rich, in-the-moment sensory experience that activates almost the entire brain, allowing neural pathways to be exercised and strengthened. The hypothalamus and autonomic nervous system (ANS) are affected by music, leading to changes in heart rate, sleep, mood, breathing, and other unconscious behaviors.
The limbic system, which monitors reward and motivation, becomes active, and dopamine and serotonin are released, boosting happiness and enhancing focus. While music is generally beneficial, it should be noted that listening to intensely sad or angry music can lead to negative emotions and cause the brain to release cortisol. The amygdala, which governs emotions, works in conjunction with the hippocampus to recall emotional memories tied to the music. Listening to music also affects the motor system, which makes dancing or tapping your foot to the beat come naturally. The orbitofrontal cortex (OFC), located in the frontal lobe and implicated in decision-making, displays hyperactivity when listening to music, similar to the hyperactivity the OFC typically displays in people with OCD. An explanation for this is that the tension, anticipation, and resolution of music require high focus and concentration. Furthermore, interpreting lyrics and producing speech engages Broca’s Area and Wernicke’s Area, and, as a result, music may improve language processing.

Memory

Have you ever recalled one line of an old song, and suddenly remembered the rest of the lyrics? Songs that we listen to repeatedly can become ingrained in our implicit, or unconscious, memory. People often listen to the same set of songs from their adolescence and early adulthood; however, listening to unfamiliar music is beneficial because the brain has to adapt and process new sounds. Listening to music from a past time period can help you recall old memories in vivid sensory detail. Remarkably, even though neurodegenerative diseases erode explicit memory, many patients with Alzheimer’s can still recall familiar music and lyrics. Due to muscle memory stored in the cerebellum, patients with neurodegenerative diseases may even remember how to play instruments like the piano. Attaching rhythm and melody to phrases can help the brain recall them more easily, a technique heavily used in commercials.
Some studies have found that listening to Mozart while working can enhance spatial processing. Music can also boost memory by triggering neurogenesis, or the formation of neurons, in the hippocampus.

Music as therapy and treatment

Music has the power to evoke specific emotions and can be used in therapy, specifically in regard to memory and neurodegenerative diseases. By affecting the putamen, rhythms in music can temporarily reduce symptoms of Parkinson’s disease and help patients with coordination and walking. Furthermore, research has suggested that people with epilepsy can minimize the occurrence of seizures by listening to Mozart’s Sonata for Two Pianos in D Major. Overall, more research is needed to fully understand how music affects the brain and to explore possible medical implications.

References:
Eck, Allison. “How Music Resonates in the Brain.” Harvard Medicine Magazine, April 23, 2024. https://magazine.hms.harvard.edu/articles/how-music-resonates-brain.
Heshmat, Shahram. “Why Does Music Evoke Memories?” Psychology Today, September 14, 2021. https://www.psychologytoday.com/us/blog/science-choice/202109/why-does-music-evoke-memories.
“Keep Your Brain Young with Music.” Johns Hopkins Medicine, April 13, 2022. https://www.hopkinsmedicine.org/health/wellness-and-prevention/keep-your-brain-young-with-music.
Magsamen, Susan. “How Music Affects Your Brain.” Time, April 28, 2023. https://time.com/6275519/how-music-affects-your-brain/.
Shepherd, Becks. “How Does Music Affect Your Brain?” LiveScience, December 15, 2022. https://www.livescience.com/how-does-music-affect-your-brain.
“Your Brain on Music.” Pegasus Magazine, October 30, 2019. https://www.ucf.edu/pegasus/your-brain-on-music/.
- Time Warp Whirlwind
Author: Maia Zaman Arpita

Time travel, once confined to the realm of science fiction, has long fascinated scientists and dreamers alike. From H.G. Wells' classic "The Time Machine" to the blockbuster film "Back to the Future," the idea of traveling through time has captured the human imagination. But is time travel really possible?

According to the theory of relativity, developed by Albert Einstein in the early 20th century, time is not a constant but is relative to the observer. Time can pass at different rates for different observers, depending on their speed and the strength of gravity they experience. This has profound implications for time travel: traveling at high speed, or near a massive object, causes time to pass more slowly relative to a distant observer. But does this mean that time travel is possible? Could we one day journey to the past or the future? The answer may lie in a peculiar solution to the theory of relativity: wormholes.

Einstein's special theory of relativity changed the way we think about time. According to the theory, time is relative to the observer's frame of reference, so it can move at different rates for different observers. This strange effect is called "time dilation," and it can have dramatic consequences for time travelers. For example, if a person travels at near the speed of light, time will pass more slowly for them than for someone on Earth; when they return, they will have aged less than the people who stayed behind. This phenomenon has been observed in experiments with atomic clocks, and it is a fundamental part of the way the universe works. But how can we use this knowledge to travel through time? That's where wormholes come in. Wormholes, also known as Einstein-Rosen bridges, are hypothetical tunnels through spacetime that connect two distant points.
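The time dilation described above can be put into numbers with a short calculation. The sketch below is an illustration only (the speeds and trip lengths are chosen for the example, not taken from the article); it computes the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), which tells us how much faster clocks on Earth run compared with a clock moving at speed v:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Return gamma = 1 / sqrt(1 - (v/c)^2) for a speed v in m/s."""
    if not 0.0 <= v < C:
        raise ValueError("speed must be non-negative and below c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def earth_years_elapsed(traveler_years: float, v: float) -> float:
    """Years that pass on Earth while `traveler_years` pass aboard
    a ship cruising at constant speed v (m/s)."""
    return traveler_years * lorentz_factor(v)

# At 60% of light speed, gamma is 1.25: for every 4 years aboard
# the ship, 5 years pass on Earth.
print(lorentz_factor(0.6 * C))           # -> approximately 1.25
# At 99.5% of light speed, gamma is about 10: a 1-year voyage for
# the traveler corresponds to roughly a decade back home.
print(earth_years_elapsed(1.0, 0.995 * C))
```

The sharp growth of gamma as v approaches c is why the effect is negligible at everyday speeds yet dramatic for the near-light-speed traveler described above.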
Wormholes are predicted by the mathematics of general relativity, which Einstein developed after formulating the special theory of relativity. In theory, wormholes could allow us to travel vast distances in a short amount of time by taking a shortcut through spacetime. But could they also be used for time travel?

A proposed method for time travel using wormholes involves creating a "time machine loop," which could be used to travel to both the future and the past. Two wormholes are created, one in the present and one in the future; the wormhole in the future is then sent back through time to the present, creating a closed loop. By entering the wormhole in the future, a person could emerge back in the present at a later time, traveling to the future. The loop can be further manipulated to allow travel to the past: sending the future wormhole backward in time creates a new closed loop connecting the present to the past, opening a passage for travel back and forth between the two points in time.

While the concept of time travel using wormholes is fascinating, there are several significant challenges and limitations. First, it is not yet known whether wormholes exist in nature: they are mathematically possible, but we have yet to observe any evidence of their existence. Even if they do exist, they are likely to be extremely unstable, making them difficult or impossible to use for travel, and even if we were able to create a wormhole, we would need a way to keep it open long enough for a person to travel through it. Another potential issue is the enormous amount of energy that would be required to create and sustain a wormhole. Moreover, there are a number of theoretical and philosophical questions about the possibility of time travel.
For example, what would happen if you were to change something in the past? Could you cause a paradox or would the universe find a way to correct itself? While time travel remains a fascinating subject for scientists, philosophers, and science fiction enthusiasts, it is clear that there are still many challenges and unknowns. Despite these limitations, the pursuit of time travel is an exciting and intellectually stimulating endeavor. It forces us to grapple with fundamental questions about the nature of time, the universe, and our own place within it. As we continue to explore and experiment with the fundamental laws of physics, we may one day discover a way to unlock the secrets of time travel.
- Nuclear Visionary: Dr. Wazed Miah
Author: Maia Zaman Arpita

Introduction

In a country known for its lush green forests and rolling rivers, Dr. M. A. Wazed Miah’s expertise in nuclear science and technology has been an illuminating force. His contributions to nuclear energy and the peaceful application of nuclear technology have not only changed the landscape of Bangladesh but also shone a light on the country’s scientific potential. His work has been pivotal in scientific advancement and in fostering the development of Bangladesh's nuclear energy program, with a strong emphasis on the peaceful applications of nuclear technology. Dr. Miah’s career is characterized by his commitment to education, research, and international cooperation, making him a key figure in the promotion of nuclear science in the developing world.

Early life and education

Dr. M. A. Wazed Miah, born on February 7, 1942, in the Bogura district of Bangladesh, displayed a keen interest in science from a very young age. His academic journey began at the University of Dhaka, where he pursued a Bachelor of Science degree in Physics. Demonstrating exceptional aptitude, he continued his education at the same institution, obtaining a Master’s degree in Physics. Fueled by a desire to deepen his understanding and enhance his skills in nuclear science and technology, Dr. Miah moved to the United Kingdom, where he earned a Ph.D. in Nuclear Physics from the University of London. His doctoral research focused on nuclear interactions and served as a cornerstone for his future contributions to the field.

Contributions to Nuclear Science

Dr. Wazed Miah's career is marked by extensive research in nuclear physics and a steadfast commitment to the peaceful utilization of nuclear technology. In 1973, he played a crucial role in the establishment of the Bangladesh Atomic Energy Commission (BAEC), serving as its first chairman.
Under his leadership, the commission focused on harnessing nuclear energy for peaceful applications, including advancements in agriculture, medicine, and energy production. His expertise in nuclear reactor technology and radiation protection has been instrumental in ensuring the safe implementation of nuclear energy initiatives in Bangladesh. Dr. Miah was actively involved in the development and commissioning of the country's first research reactor, which has been pivotal for scientific research and training. Additionally, he contributed significantly to the establishment of nuclear safety protocols and regulatory frameworks, ensuring that the country's nuclear activities align with international safety standards. Dr. Miah has also been a prolific contributor to scientific literature, authoring numerous research papers and articles on various aspects of nuclear science. His work has been published in reputable scientific journals, contributing to the global discourse on nuclear technology and its applications.

Advocacy for peaceful nuclear technology

Throughout his career, Dr. Wazed Miah was an ardent advocate for the peaceful applications of nuclear energy. He consistently emphasized the critical role of nuclear technology in addressing the energy demands of developing nations, particularly in the context of sustainable development. His efforts were directed toward promoting nuclear science education and research, thereby nurturing a new generation of scientists and engineers in Bangladesh. Dr. Miah was involved in various educational initiatives, collaborating with universities and research institutions to enhance nuclear science curricula and to provide training opportunities for students and professionals. His commitment to education extended beyond the classroom; he organized workshops, seminars, and conferences aimed at raising awareness about the benefits and safety of nuclear technology. Moreover, Dr.
Miah has been a strong proponent of international collaboration in nuclear research and safety. He represented Bangladesh in numerous international forums, including the International Atomic Energy Agency (IAEA), where he participated in discussions on nuclear safety, security, and the peaceful use of nuclear energy. His engagement in these forums has facilitated knowledge exchange and positioned Bangladesh as an emerging player in the global nuclear community.

Conclusion

Dr. M. A. Wazed Miah is a distinguished figure in the field of nuclear science, whose dedication and contributions have significantly influenced the trajectory of nuclear technology in Bangladesh. His advocacy for the peaceful use of nuclear energy continues to guide and inspire future generations of scientists and policymakers. As Bangladesh progresses towards a sustainable energy future, the principles and vision articulated by Dr. Wazed Miah remain essential in shaping the responsible development of nuclear technology. His life and work exemplify the potential of science to contribute to societal advancement, making him a vital figure in the ongoing discourse on nuclear energy and its role in global development.
- Naegleria fowleri: Pathogenesis, Diagnosis, and Treatment Strategies
Authors: Aditya Tyagi, Rushdanya Bushra, and Deniz Tiryakioglu Abstract: Known as the "brain-eating amoeba," Naegleria fowleri is a thermophilic, free-living protozoan that causes primary amoebic meningoencephalitis (PAM), an infection of the central nervous system that is uncommon but nearly always fatal. This review looks at the pathophysiology, difficulties with diagnosis, and approaches to treating infections caused by N. fowleri. The pathogenesis starts when the amoeba's trophozoite form enters the nasal passages, usually as a result of exposure to freshwater. It then migrates along the olfactory nerve and infiltrates the brain, resulting in rapid, extensive neuroinflammation as well as necrosis. The early symptoms of PAM are nonspecific and mirror bacterial meningitis, making clinical identification difficult and demanding a high index of suspicion in endemic locations. Cerebrospinal fluid (CSF) examination is necessary for diagnostic confirmation; the presence of motile trophozoites or positive PCR results indicates infection. Treatment options are few and frequently ineffective; however, a combination therapy using amphotericin B, rifampin, and miltefosine has demonstrated some success. Aggressive treatment and early diagnosis remain essential for patient survival. To tackle this fatal infection, this systematic review emphasizes the need for increased awareness, rapid diagnostic methods, and innovative therapy options. Diagnosis: The primary methods for diagnosing Naegleria fowleri are imaging scans, laboratory testing, and clinical suspicion. A lumbar puncture is usually done to acquire cerebrospinal fluid (CSF) for investigation when PAM is suspected. Low glucose, high protein levels, and an increased white blood cell count (pleocytosis) are common findings in the CSF of patients infected with Naegleria fowleri. These findings are also suggestive of bacterial meningitis. 
However, to identify the amoeba itself, particular diagnostic procedures are required. Detection of Naegleria fowleri in Patients: Several laboratory methods can be used to identify Naegleria fowleri in patients. Motile amoebae can be detected by direct microscopic examination of the CSF, although this technique requires a trained technician and is not always conclusive. A variety of staining methods, including trichrome or Giemsa-Wright, can improve the visibility of the amoebae in CSF samples. Polymerase chain reaction (PCR) and immunofluorescence assays are examples of more sophisticated techniques. Naegleria fowleri DNA can be found in CSF samples using PCR, a very sensitive method that offers a conclusive diagnosis. Through the use of fluorescent dye-tagged antibodies that bind specifically to Naegleria fowleri, immunofluorescence assays enable detection of the organism under a fluorescence microscope. Even though these techniques are more precise, they may not be available at all healthcare facilities because they require specialized equipment and trained personnel. Imaging tests like magnetic resonance imaging (MRI) and computed tomography (CT) scans can aid in the diagnosis in addition to CSF analysis by identifying brain abnormalities that are consistent with PAM. These may include brain swelling and other indicators of severe inflammation. Initiating proper treatment for Naegleria fowleri requires early and precise detection. This treatment may involve a combination of antimicrobial medicines and supportive care measures. The prognosis for PAM is still poor despite these efforts, which emphasizes the significance of prompt identification and treatment. Treatment: Due to the rarity of human infections, case reports and in vitro research mostly inform treatment choices for N. fowleri infections. 
Based on this research, amphotericin B is generally regarded as the most effective medicine, despite the paucity of clinical trials evaluating therapeutic efficacy. Furthermore, case reports have described the use of additional anti-infectives such as azithromycin, rifampin, miltefosine, miconazole, and fluconazole, though their effectiveness may vary. Other agents, such as hygromycin, roxithromycin, clarithromycin, erythromycin, and zeocin, have also been examined in vitro and/or in vivo. Nevertheless, more research is needed to fully understand their clinical usefulness and effectiveness in treating N. fowleri infections. Amphotericin B In a recent study, a total of 381 global cases of PAM were identified from 1937 through 2018, and only seven survivors were reported. (Debnath 2) From 1937 to 2013, there were 142 reported cases of PAM in the United States. Treatment data for 70 of the 142 (49%) patients were available. Among these patients, 36 (51%) received treatment for PAM, and 3 (8%) of them survived. All 36 patients treated for PAM were administered amphotericin B, with 7 (19%) receiving only intravenous (IV) therapy, 5 (14%) receiving only intrathecal (IT) therapy, and 24 (67%) receiving a combination of IV and IT therapy. Additionally, 7 patients received a non-deoxycholate amphotericin B formulation, which includes liposomal and lipid complex formulations. The original US survivor and the 2013 survivors received deoxycholate amphotericin B. AmBisome, the liposomal formulation of amphotericin B, was approved for use in the United States in 1997 by the US Food and Drug Administration. Patients treated with amphotericin B for Naegleria after 1997 were more likely to receive the liposomal preparation, as fewer renal adverse effects have been reported with this formulation. However, it has been found that liposomal amphotericin B is less effective against N. fowleri in vitro and in a mouse model compared to deoxycholate amphotericin B. 
It's important to note that these findings came from two different studies, and the deoxycholate formulation was given at a higher dose than what is typically used in patients. Seven patients with N. fowleri infection received a non-deoxycholate formulation. Given the extremely poor prognosis of PAM caused by N. fowleri, healthcare providers might want to consider using deoxycholate amphotericin B instead of the liposomal or lipid complex formulation. However, if deoxycholate amphotericin B is not immediately available, treatment should be initiated with a non-deoxycholate formulation to facilitate prompt treatment of the patient. Treatment history of the seven confirmed survivors showed that all survivors received amphotericin B, either intravenously or both intravenously and intrathecally. Only one survivor received amphotericin B alone; the rest of the survivors were treated with a combination of drugs. (Debnath 2) Unfortunately, amphotericin B alone is not universally effective. It was administered to nearly three-quarters (71%) of PAM patients. The formulation of amphotericin B (e.g., deoxycholate vs. lipid or liposomal) may play a role in treatment effectiveness. Conventional deoxycholate formulations have shown greater efficacy in vitro and in mouse models, despite greater adverse effects. Additionally, amphotericin B and the other drugs included in survivors’ regimens may not be available in all settings. Multiple case reports specifically mentioned the unavailability of amphotericin B at the time of patient presentation and diagnosis. (Gharpure et al. 7) The repurposing of antifungal drugs in the drug discovery of Primary Amebic Meningoencephalitis (PAM) has a historical context. Amphotericin B, an antifungal drug, was used in all confirmed survivors and administered to about three-quarters of PAM patients. However, it was not universally effective. 
(Debnath 5) As a result, researchers investigated the effect of other antifungal drugs to identify a more active and less toxic alternative to amphotericin B. A water-soluble polyene macrolide called corifungin, which is in the same class as amphotericin B, was tested against N. fowleri trophozoites. The compound was found to be twice as potent as amphotericin B. Although in vitro studies suggested that corifungin may have a similar mechanism of action to amphotericin B in N. fowleri, the increased solubility of corifungin may have contributed to its better tolerability and pharmacokinetic distribution in animals. (Debnath 4) Silver nanoparticles were conjugated with amphotericin B, resulting in enhanced amebicidal activity. Additional studies are needed to confirm whether this increased activity observed in vitro translates to improved drug delivery and efficacy in animal models. (Debnath 4) Miltefosine In 1980, the drug miltefosine was first used as an experimental treatment for breast cancer. It has been observed that the Naegleria parasite can cause a strong inflammatory response, resulting in tissue damage and bleeding. By 2013, the CDC had reported 26 cases where miltefosine was used. In a laboratory study, miltefosine was compared with amphotericin B over the course of a month. The study found that the minimum amount of the drug needed to inhibit the growth of the parasite was 0.25 μg/ml for miltefosine and 0.78 μg/ml for amphotericin B. The survival rates for patients treated with miltefosine were 55%, compared to 40% for those treated with amphotericin B. The optimal dosage, frequency, and treatment duration for miltefosine are not yet fully understood, but a common regimen uses 50 mg tablets with a maximum daily dosage of 2.5 mg/kg. The duration of treatment varies depending on the individual case. In 2013, two children survived and recovered from primary amoebic meningoencephalitis after being treated with miltefosine. 
In 2016, another child became the fourth person in the United States to survive Naegleria fowleri infection after receiving treatment that included miltefosine. In the summer of 2013, a 12-year-old girl from Arkansas was successfully treated for PAM, leading to full neurological recovery. Her treatment involved the administration of miltefosine (MLT), amphotericin B (AmB), fluconazole (FCZ), azithromycin (AZM), rifampin (RIF), and dexamethasone (DEX). In addition, the girl was put into a hypothermic state to manage intracranial pressure and reduce brain injury caused by hyperinflammation. The treatment regimen used in the 12-year-old girl was later administered to two other patients, a 12-year-old boy and an 8-year-old boy. One of them survived but experienced poor neurological function, including static encephalopathy, profound mental disability, and partial seizure disorder control with anticonvulsant therapy. The other patient unfortunately did not survive and was declared brain-dead on hospital day 16 due to brain herniation. The girl's treatment plan included all the medications administered to these two patients; the notable distinctions in her care were that she received Naegleria-specific drugs about 48 hours after symptoms appeared and underwent intensive management of elevated intracranial pressure, including therapeutic hypothermia. The three patients discussed in this report all received miltefosine as part of their treatment for Naegleria infection, but their outcomes varied widely. One patient died, another survived with significant neurological impairment, and the third survived with full neurological recovery. This indicates that while miltefosine shows promise as a treatment for Naegleria infection, it is not a guaranteed cure. Since 2013, all surviving US patients with Naegleria infection received miltefosine, compared to only a third of fatal cases. 
Successful treatment of Naegleria infection likely involves early diagnosis, combination drug therapy (including miltefosine), and aggressive management of elevated intracranial pressure, similar to the approach used in treating traumatic brain injury. Azithromycin Although amphotericin B is the first choice for treating primary amebic meningoencephalitis, it is often associated with renal toxicity, leading to azotemia and hypokalemia. Furthermore, not all patients treated with amphotericin B have survived primary amebic meningoencephalitis. In one study, azithromycin, a macrolide antibiotic, was chosen based on previous reports describing the in vitro sensitivity of Acanthamoeba spp. to this drug and its activity in experimental toxoplasmosis. The researchers discovered that azithromycin was highly active against N. fowleri in vitro and that it protected 100% of mice infected with N. fowleri at a dose of 75 mg/kg of body weight per day for 5 days. In contrast, amphotericin B only protected 50% of mice at a dose of 7.5 mg/kg per day, while all control mice died during the 28-day observation period. As azithromycin is a relatively non-toxic agent that might be useful in treating PAM alone or in combination with amphotericin B, the researchers evaluated the combined activity of azithromycin and amphotericin B in vitro and in vivo. In this study, it was discovered that the combination of amphotericin B and azithromycin had a synergistic effect against N. fowleri when used together in fixed concentration ratios of 1:1, 1:3, and 3:1. The study also investigated the combined effect of these two drugs in a mouse model of PAM. It was found that a combination of 2.5 mg/kg of amphotericin B and 25 mg/kg of azithromycin, administered once daily for 5 days, provided 100% protection to mice infected with N. fowleri. In comparison, when used individually, amphotericin B and azithromycin only protected 27% and 40% of mice, respectively. 
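As an aside, fixed-ratio combination results like those just described are conventionally summarized with a fractional inhibitory concentration (FIC) index. The sketch below illustrates the standard scoring convention only; the MIC values are hypothetical placeholders, not figures from the study.

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration (FIC) index for a drug pair.

    FIC = MIC_A(in combination)/MIC_A(alone)
        + MIC_B(in combination)/MIC_B(alone)
    """
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    # Conventional cutoffs: <= 0.5 synergy, > 4.0 antagonism,
    # anything in between additivity/indifference.
    if fic <= 0.5:
        return "synergy"
    if fic > 4.0:
        return "antagonism"
    return "additivity/indifference"

# Hypothetical MICs (μg/ml) for drug A alone, drug B alone, and each
# drug within a fixed-ratio combination -- illustration only.
fic = fic_index(mic_a_alone=0.78, mic_b_alone=16.0,
                mic_a_combo=0.10, mic_b_combo=2.0)
print(round(fic, 3), interpret(fic))  # 0.253 synergy
```

A combination whose FIC index falls at or below 0.5 is read as synergistic, matching the qualitative conclusion drawn from the mouse-protection data in the study described above.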
These findings suggest that the combined use of these agents was 100% effective, while each agent alone was less than 50% effective, consistent with the observed synergy in vitro. Little is known about the mechanism of action of azithromycin against N. fowleri. Azithromycin inhibits bacterial protein synthesis by binding to the 50S ribosomal subunit and blocking peptide bond formation and translocation. Azithromycin has been shown to be widely distributed in brain tissue following systemic administration in humans, whereas amphotericin B exhibits poor penetration of the blood-brain barrier. N. fowleri exposed to amphotericin B rounds up and fails to form pseudopodia. The ultrastructural abnormalities included alteration of nuclear shape, degeneration of mitochondria, and the appearance of autophagic vacuoles. The current study shows that azithromycin and amphotericin B have a synergistic effect against the Lee strain of N. fowleri. This suggests that using these two agents together could be an effective treatment for human infections with this organism. Further research should be done to determine the exact range of synergistic activity of these two agents, and additional studies with other agents could be conducted to improve the selection of drugs and the treatment of N. fowleri infection. Rifampin Although rifampin has been used in all of the PAM survivor cases in the United States and Mexico (all three cases in the United States and one survivor in Mexico), its efficacy remains questionable. The main issue is whether enough rifampin enters the central nervous system (CNS) at standard therapeutic doses. Several studies have shown that rifampin reaches favorable concentrations in the CNS, as measured by drug concentrations in the cerebrospinal fluid (CSF). However, a report by Mindermann et al. found significant variations in the concentrations of rifampin in different parts of the CNS. 
Concentrations in the cerebral extracellular space and in normal brain tissue were measured at 0.32 ± 0.11 μg/ml and 0.29 ± 0.15 μg/ml, respectively. These concentrations would be sufficient to exceed the required minimum inhibitory concentration (MIC) for most susceptible bacteria but might not be enough to eradicate N. fowleri. In an initial report by Thong et al. in 1977, it was found that the natural product rifamycin delayed the growth of N. fowleri by 30 to 35% when used at concentrations of 10 μg/ml over a 3-day period. However, rifamycin lost its ability to inhibit N. fowleri growth by the 6th day of incubation. It was observed that growth inhibition (>80%) was sustained for the entire 6-day period only when higher concentrations of rifampin, a semisynthetic analogue of rifamycin (100 μg/ml), were used. The later report by Ondarza did not show any minimum inhibitory concentration (MIC) for rifampin against N. fowleri. It revealed a 50% inhibitory concentration (IC50) of >32 μg/ml, which was the highest concentration tested in the study. These findings do not provide evidence to support the use of standard doses of rifampin for treating PAM. One issue with using rifampin to treat N. fowleri is the high potential for drug-drug interactions when combined with other medications. Rifampin is known to induce the CYP2 and CYP3 family of monooxygenase enzymes, specifically CYP2C9, CYP2C19, and CYP3A4. The greatest likelihood of interaction with rifampin is when it is used with 14α-demethylase inhibitors, also known as the azole fungistatics. Miconazole was used initially in most cases, with fluconazole favored in more recent cases. 14α-Demethylase is itself a specific cytochrome P450 isoform, and there are known interactions between rifampin and fluconazole. 
When these two drugs are taken together, it leads to significant changes in the way fluconazole is processed in the body: a decrease of more than 20% in the area under the concentration-time curve (AUC), up to a 50% decrease in critically ill patients, at least a 30% increase in clearance rate, and a 28% shorter half-life. Since there has been a demonstrated synergy between 14α-demethylase inhibitors and amphotericin B against N. fowleri, adding rifampin to the combination may not be very beneficial and could actually work against the maximum therapeutic effect of the other agents. Fluconazole In certain instances of Naegleria fowleri infection, amphotericin B has been used with the azole antifungal medication fluconazole for therapy. Studies show that some patients benefit further from fluconazole added to amphotericin B therapy. Fluconazole may offer an advantage because it penetrates the central nervous system (CNS) more effectively than amphotericin B. Fluconazole and amphotericin B have synergistic actions that help to eradicate N. fowleri infection, presumably as a result of neutrophil recruitment. As a result, fluconazole can be used as an adjunct to amphotericin B in individuals suspected of having N. fowleri infection. The CDC advises administering intravenous fluconazole once daily at a dose of 10 mg/kg/day, up to a maximum of 600 mg/day, for a total of 28 days. The goal of this dosage schedule is to maximize fluconazole's therapeutic efficacy in N. fowleri infections. Another azole antifungal that works well against N. fowleri in vitro is voriconazole, which is effective at concentrations of at least 1 μg/ml. Specifically, fluconazole has been combined with additional medications, including rifampin, miltefosine, and amphotericin B, to develop a multimodal therapy plan. In addition to fluconazole's inhibition of ergosterol synthesis, amphotericin B works by binding to ergosterol and creating holes in the cell membrane. 
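The weight-based fluconazole regimen cited above (10 mg/kg/day IV, capped at 600 mg/day, for 28 days) reduces to simple arithmetic. A minimal sketch follows; the function name and parameter defaults are my own choices for illustration, not part of any clinical protocol.

```python
def fluconazole_daily_dose_mg(weight_kg, mg_per_kg=10.0, cap_mg=600.0):
    """Daily IV fluconazole dose per the regimen cited in the text:
    10 mg/kg/day, capped at 600 mg/day (given for a 28-day course)."""
    return min(weight_kg * mg_per_kg, cap_mg)

# A 40 kg patient falls below the cap; an 80 kg patient hits it.
print(fluconazole_daily_dose_mg(40))  # 400.0
print(fluconazole_daily_dose_mg(80))  # 600.0
```

The cap means any patient at or above 60 kg receives the same 600 mg/day maximum under this schedule.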
Miltefosine, originally developed as an anticancer agent and later used as an anti-leishmanial medication, has demonstrated amoebicidal activity, which increases the efficacy of the treatment plan. Clinical instances have shown that combination therapy can provide better results than monotherapy, but because PAM progresses aggressively and is difficult to diagnose in a timely manner, survival rates are still low. A further difficulty is the blood-brain barrier, which restricts the amount of medication that can reach the infection site. Research on dosing regimens and drug delivery techniques continues, aimed at enhancing the penetration of azoles and other therapeutic agents into the central nervous system. Despite these challenges, azoles remain a crucial component in the therapeutic arsenal against brain-eating amoebas. Ongoing research and clinical trials are essential to refine treatment protocols and enhance the efficacy of these drugs, offering hope for improved survival rates in affected patients. Early diagnosis and prompt initiation of combination therapy are vital, emphasizing the need for heightened awareness and rapid medical response to symptoms indicative of PAM. Vaccination as a Potential Treatment Strategy: Vaccines play a crucial role in strategies against many infectious diseases, including, potentially, infections caused by Naegleria fowleri. However, despite recent efforts, no universally accepted and officially authorized vaccine for Naegleria fowleri exists. This section will address and discuss potential candidate proteins for an innovative method of treating this lethal infection. In a recent study, Gutiérrez-Sánchez et al. (2023) investigated the role of two potential antigen vaccine candidates, a 19 kDa polypeptide and a MP2CL5 peptide, using a BALB/c mouse model (Gutiérrez-Sánchez, 2023). 
They measured the immunologic response using different methods, including flow cytometry, ELISA, and the investigation of specific antibodies (IgA, IgG, and IgM) in the serum and nasal cavity. According to the results, the 19-kDa polypeptide yielded promising results, with an 80% protection rate against Naegleria fowleri infection. In addition, when combined with cholera toxin (CT), this vaccine candidate demonstrated up to 100% protection. Both antigens showed a specific immune response against infection, and a noteworthy increase in the number of T and B lymphocytes was observed by the team in nasal passages and nasal-associated lymphoid tissue. Finally, increased levels of IgA, IgG, and IgM were detected in vaccinated mice, both in the serum and nasal cavity. These findings highlight the potential of these antigens as promising candidates for a vaccine against Naegleria fowleri infections due to their high efficacy and capacity to offer localized protection. Another study was conducted to assess the immunological effects of an mRNA-based vaccine for the treatment of PAM by Naveed and his team using a BALB/c mouse model (Naveed, Muhammad, et al., 2024). They analyzed the responses by measuring T-cell numbers and IgA and IgG antibody levels in various samples, including mucosal tissue and serum. In this study, increased antibody levels were detected in both samples, suggesting a systemic response against Naegleria fowleri antigens. Moreover, the production of important cytokines such as IFN-γ was also observed, highlighting the enhanced T-cell responses in the mouse model. This holds significance since it implies a strong immune response. The group also reported no side or adverse effects on the model during or after the trials, implying its safety. Despite the promising results, more preclinical and clinical trials are needed to determine whether mRNA vaccines hold potential to treat PAM in patients (Naveed, Muhammad, et al., 2024). 
Immunoinformatics is another popular technique used by many researchers to develop vaccines in a safer way. In 2023, Sarfraz et al. demonstrated the use of this innovative method to develop a preventative approach for PAM infections. The objective of this study was to find distinct T- and B-cell epitopes through the utilization of diverse screening techniques. Different parameters, including cytokine induction, allergenicity, toxicity, and antigenicity, were taken into account to find epitopes that trigger the immune responses of both cell types (Sarfraz, Asifa, et al., 2023). Although further research and the support of more clinical studies are needed to fully demonstrate its efficacy in patients suffering from PAM infections, a multi-epitope vaccine was constructed from the top-ranked B- and T-cell epitopes as a result of this study. This emphasizes the importance of employing computational techniques in vaccine design, utilizing a fast and efficient method (Sarfraz, Asifa, et al., 2023). Vaccines could become a crucial component of prevention strategies. Although research on vaccine development for Naegleria fowleri infections remains scarce, multiple studies have consistently shown encouraging outcomes. This serves as a reminder that additional study could lead to the identification of more candidate antigens, which could be advantageous in the fight against this lethal disease. Conclusion: Naegleria fowleri, often referred to as the "brain-eating amoeba," remains a formidable pathogen due to its rapid pathogenesis and the high fatality rate associated with primary amoebic meningoencephalitis (PAM). This review highlights the critical challenges in diagnosing and treating infections caused by N. fowleri, emphasizing the need for heightened awareness and advanced diagnostic methods to facilitate early detection. 
The pathogenesis involves the amoeba entering the nasal passages and migrating to the brain, causing extensive neuroinflammation and necrosis. The diagnostic process is complicated by the nonspecific early symptoms of PAM, which mimic bacterial meningitis, necessitating specific cerebrospinal fluid (CSF) examination and advanced laboratory techniques like PCR and immunofluorescence assays. Treatment strategies, though limited, have shown some promise. Amphotericin B remains the cornerstone of therapy, though its efficacy varies, and it is often accompanied by significant adverse effects. Combination therapies, including miltefosine, azithromycin, rifampin, and fluconazole, have shown synergistic effects and improved outcomes in some cases, but these treatments are not universally effective and are often hindered by issues such as drug availability and the ability to penetrate the blood-brain barrier. Moreover, the exploration of innovative treatment strategies, such as vaccine development, presents a promising frontier in combating N. fowleri infections. Studies investigating potential vaccine candidates, such as the 19 kDa polypeptide, MP2CL5 peptide, and mRNA-based vaccines, have shown encouraging results in animal models, highlighting their potential to offer localized and systemic protection against N. fowleri. Immunoinformatics approaches further underscore the potential for developing multi-epitope vaccines, utilizing computational techniques to identify effective T- and B-cell epitopes. In conclusion, while significant progress has been made in understanding the pathogenesis, diagnosis, and treatment of N. fowleri infections, challenges remain. Early and precise diagnosis, aggressive and combination treatment strategies, and innovative approaches like vaccine development are essential to improving patient outcomes. 
Continued research and clinical trials are crucial to refining these strategies and enhancing the efficacy of treatments, offering hope for better survival rates in the face of this lethal infection. References 1. Gutiérrez-Sánchez, Mara, et al. “Two MP2CL5 Antigen Vaccines from Naegleria Fowleri Stimulate the Immune Response against Meningitis in the BALB/C Model.” Infection and Immunity, U.S. National Library of Medicine, July 2023, www.ncbi.nlm.nih.gov/pmc/articles/pmid/37272791/. Accessed 02 June 2024. 2. Naveed, Muhammad, et al. “Development and Immunological Evaluation of an Mrna-Based Vaccine Targeting Naegleria Fowleri for the Treatment of Primary Amoebic Meningoencephalitis.” Nature News, Nature Publishing Group, 8 Jan. 2024, www.nature.com/articles/s41598-023-51127-8. Accessed 02 June 2024. 3. Sarfraz, Asifa, et al. “Structural Informatics Approach for Designing an Epitope-Based Vaccine against the Brain-Eating Naegleria Fowleri.” Frontiers, 16 Oct. 2023, www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2023.1284621/full. Accessed 02 June 2024. 4. Gharpure, Radhika, et al. "Epidemiology and clinical characteristics of primary amebic meningoencephalitis caused by Naegleria fowleri: a global review." Clinical Infectious Diseases 73.1 (2021): e19-e27. 5. Debnath, Anjan. "Drug discovery for primary amebic meningoencephalitis: from screen to identification of leads." Expert review of anti-infective therapy 19.9 (2021): 1099-1106. 6. Capewell, Linda G., et al. "Diagnosis, clinical course, and treatment of primary amoebic meningoencephalitis in the United States, 1937–2013." Journal of the Pediatric Infectious Diseases Society 4.4 (2015): e68-e75. 7. Alli, Ammar, et al. "Miltefosine: a miracle drug for meningoencephalitis caused by free-living amoebas." Cureus 13.3 (2021). 8. Güémez, Andrea, and Elisa García. "Primary amoebic meningoencephalitis by Naegleria fowleri: pathogenesis and treatments." Biomolecules 11.9 (2021): 1320. 9. 
Cope, Jennifer R., et al. "Use of the novel therapeutic agent miltefosine for the treatment of primary amebic meningoencephalitis: report of 1 fatal and 1 surviving case." Clinical Infectious Diseases 62.6 (2016): 774-776. 10. Soltow, Shannon M., and George M. Brenner. "Synergistic activities of azithromycin and amphotericin B against Naegleria fowleri in vitro and in a mouse model of primary amebic meningoencephalitis." Antimicrobial agents and chemotherapy 51.1 (2007): 23-27. 11. Goswick, Shannon M., and George M. Brenner. "Activities of azithromycin and amphotericin B against Naegleria fowleri in vitro and in a mouse model of primary amebic meningoencephalitis." Antimicrobial agents and chemotherapy 47.2 (2003): 524-528. 12. Grace, Eddie, Scott Asbill, and Kris Virga. "Naegleria fowleri: pathogenesis, diagnosis, and treatment options." Antimicrobial agents and chemotherapy 59.11 (2015): 6677-6681. Author Biographies Aditya Tyagi, a rising junior, is passionately pursuing a career as a neurosurgeon. With a deep interest in the medical field, Aditya aims to impact people's lives positively through science and research. His commitment to understanding the human body and finding innovative medical solutions drives his dedication to becoming a neurosurgeon. Aditya is determined to make a meaningful difference in patients' lives through his future work in medicine. Rushdanya Bushra is currently a student at Govt. Hazi Muhammad Mohsin College. From a very young age, Rushdanya has had a keen interest in biology. He always longed to delve deeply into this subject, and over time, his interest developed into a habit. He never gets bored reading anything related to this field. His interest deepened when Rushdanya began looking through his aunt’s medical books and research works, sparking his own desire to pursue this type of research. Now Rushdanya wants to pursue his higher education in a biology-related subject so that he can engage with his passion. 
Rushdanya wants to gain a deeper understanding of the mechanisms of living organisms and learn about diseases and their treatments. Deniz is a passionate 17-year-old high school student from Turkey with a dream of becoming a neurologist specializing in neurodegenerative diseases, particularly Parkinson’s. She spends her free time reading books and playing volleyball. Inspired by a fascination with the complexities of the brain, she is dedicated to understanding and someday treating Parkinson’s disease. Her academic pursuits are driven by a desire to contribute meaningfully to neuroscience, aiming to make a difference in the lives of those affected by neurological conditions.
- Epigenetics: All you need to know
Author: Rachana R What is epigenetics? Epigenetics is the study of heritable changes in gene expression that do not involve changes to the underlying DNA sequence — a change in phenotype without a change in genotype — which in turn affects how cells read the genes. The term “epigenetics” came into general use in the early 1940s, when British embryologist Conrad Waddington used it to describe the interactions between genes and gene products, which direct development and give rise to an organism’s phenotype. "Epi-" means ‘on or above’ in Greek, and "epigenetic" describes factors beyond the genetic code. Epigenetic changes are modifications to DNA that regulate whether genes are turned on or off. Some epigenetic changes can have damaging effects that result in diseases like cancer. At least three systems, including DNA methylation, histone modification, and non-coding RNA-associated gene silencing, are currently considered to initiate and sustain epigenetic change. New and ongoing research is continuously uncovering the role of epigenetics in a variety of human disorders and fatal diseases. What is the epigenome? The epigenome is all the genes plus everything that regulates the usage of those genes. An epigenome changes over time, and these changes can be both advantageous and disadvantageous. Things like nutritious food, exercise, and manageable stress can result in epigenetic changes that promote health, while factors like processed foods, smoking, and chronic stress can cause epigenetic changes that harm health. The History of Epigenetic Research During the 1990s, there was renewed interest in genetic assimilation. This led to elucidation of the molecular basis of Conrad Waddington’s observations, in which environmental stress caused genetic assimilation of certain phenotypic characteristics in Drosophila fruit flies. Since then, research efforts have been focused on unraveling the epigenetic mechanisms related to these types of changes. 
Currently, DNA methylation is one of the most broadly studied and well-characterized epigenetic modifications, dating back to studies by Griffith and Mahler in 1969 which suggested that DNA methylation may be important in long-term memory function.

Types of epigenetic modifications
The principal type of epigenetic modification that is well understood is methylation. Methylation can be transient and can change rapidly during the life span of a cell or organism. The specific location of a given chemical modification is also very important. Other, largely permanent chemical modifications also play a role; these include histone acetylation, ubiquitination and phosphorylation.

Epigenetic inheritance
It is evident that at least some epigenetic modifications are heritable, passed from parents to their offspring in a phenomenon generally referred to as epigenetic inheritance, or passed down through multiple generations via transgenerational epigenetic inheritance. The mechanism by which epigenetic information is inherited is unclear; however, because this information is not captured in the DNA sequence, it is known that it is not passed on by the same mechanism as typical genetic information.

What are the diseases linked to epigenetics?
Conditions linked to epigenetics include aging and diseases associated with aging; disorders affecting the neurological system (including syndromes which affect intellectual ability); cancer; asthma; and autoimmune diseases.

Impact of epigenetics on biomedicine
Through years of research, researchers have recognized that the epigenome influences a wide range of biomedical conditions. This new perception has opened the door to a deeper understanding of normal and abnormal biological processes and has offered the possibility of novel interventions that might prevent certain diseases. Researchers have come to understand that epigenetic mechanisms play a key role in defining the "potentiality" of stem cells.
As those mechanisms become clearer, it may become possible to intervene and effectively alter the developmental state, and even the tissue type, of given cells. Compared to other areas of study, epigenetics is still fairly new, and there is a lot yet to be discovered. Scientists continue to explore the relationship between the genome and the chemical compounds that modify it. In particular, they are studying the effects that epigenetic modifications and errors have on gene function, protein production, and human health.
- Advances in Nanotechnology in Medicine for Targeted Drug Delivery in Humans
Author: Jaxon Pang

Abstract
Nanotechnology allows medicine to be developed and applied at the nanoscale. The application of nanotechnology to medicinal research, also referred to as nanomedicine, has resulted from technological advancements over the years that have allowed scientists to develop and improve methods of dealing with various illnesses and diseases. However, recent research has also demonstrated instances of particle toxicity, studied under the name nanotoxicology, stirring concern that the clinical use of nanotechnology could cause harm to both hosts and the surrounding environment. This paper aims to present the aspects of nanomedicine with the potential to treat severe illnesses, while also providing insight into their impacts on patients and the wider world, and, most importantly, how some of these social and environmental issues could potentially be solved.

1.0 Introduction and History
The rapidly developing world of the 21st century provides an extensive array of resources in biological and chemical engineering. With this, scientists and researchers have been able to further develop nanotechnology to accommodate the growing difficulty of diagnosing and treating various resistant or incurable diseases. The birth of nanotechnology, however, dates back to 1959, when American physicist and Nobel Prize laureate Richard Feynman introduced the concept at the annual meeting of the American Physical Society, hosted at the California Institute of Technology, in a lecture titled "There's Plenty of Room at the Bottom". He described a vision of using machines to construct smaller machines down to the molecular level. His hypotheses were eventually proven correct, and he is considered the father of modern nanotechnology.
Fifteen years later, in 1974, Norio Taniguchi, a Japanese scientist, was the first to use and define the term "nanotechnology", stating that "nanotechnology mainly consists of the processing of separation, consolidation, and deformation of materials by one atom or one molecule". Since then, two manufacturing approaches have been developed to describe the synthesis of these nanostructures: top-down and bottom-up, which differ in speed, quality, and cost. The top-down approach involves breaking bulk material down into nano-sized material, while the bottom-up approach builds nanostructures up from their basis using chemical and physical methods, atom by atom. These concepts formed the fundamental basis of nanoscience applications and made it a reality to build complex machines from individual atoms that can independently manipulate molecules and atoms to produce self-assembling nanostructures. Studies in recent years have highlighted the great potential of implementing nanotechnology in biomedicine. Examples include using nanoparticles to help diagnose many human diseases, as well as drug delivery and molecular imaging. This intensive research has yielded great results. Remarkably, many medical products containing nanomaterials are already on the market in the United States. These are usually categorised as "nanopharmaceuticals", which include nanomaterials intended for drug delivery and regenerative medicine. More recently developed types of nanoparticles cover antibacterial activities as well. Progress has also been made in the field of nano-oncology. By improving the efficacy of traditional chemotherapy drugs targeting a range of aggressive human cancers, researchers have succeeded in targeting the tumour site with several functional molecules such as nanoparticles, antibodies and cytotoxic agents.
In this case, studies have shown how nanomaterials can themselves be employed to deliver therapeutic molecules that regulate and control essential biological processes such as autophagy, metabolism, anticancer activity, and oxidative stress.1 However, implementing nanoparticles in medicine does have its own set of drawbacks. Their specific properties, such as increased surface area, result in increased reactivity and biological activity, meaning there is an increased chance that contact with nanoparticles may cause permanent damage to the central nervous system.2 It has been suggested that toxicity is inversely proportional to the size of the nanoparticles; thus nanoparticles are generally more toxic to human health than larger particles of the same chemical substance.3 This makes the risk of introducing nanoparticles into the human body greater: while they cause effects similar to other foreign particles injected into a human, such as inflammation or lung cancer, they may be more potent due to their greater surface area.4 All in all, the imbalance between the advantages and disadvantages of incorporating nanotechnology into medicine raises the question of whether nanoparticles should really be used in the field, and what the pros and cons are from both points of view.

2.0 Background
2.1 The increased incorporation of nanotechnology into medicine
In essence, nanodevices can be used in diagnostics for early and rapid disease identification and immediate medical procedural recommendations. Using nanoparticles to assist in medical diagnosis means that various otherwise undetectable diseases could be discerned as early as possible, and the appropriate medication could be administered to treat the disease before it becomes too serious.
Even if the detected illness has no effective treatment yet, researchers can use nanoparticles to test and analyse the disease and develop a temporary cure, if not a permanent one. Because it operates at the molecular scale, nanotechnology has the potential to revolutionise healthcare diagnostics through improved accuracy, sensitivity, and speed of medical tests compared to other diagnostic equipment. The need for and applications of nanomaterials in many areas of human endeavour, such as industry, agriculture, business, medicine and public health, have caused their popularity to skyrocket. Between 1997 and 2005, investment in nanotechnology research and development by governments around the world soared from $432 million to about $4.1 billion, and the corresponding industry investment exceeded that of governments by 2005. By 2015, products incorporating nanotechnology contributed approximately $1 trillion to the global economy, as depicted in Figure 1 by Lux Research: (Figure 1)5 About two million workers will be employed in nanotechnology industries, and three times that many will have supporting jobs in the future, as predicted by Lux Research.6 The drive for technological development fuels this increased usage of nanoparticles. For example, diagnostic imaging provides a visual interpretation of the interior of an organism, such as organs or tissues. Utilising nanoparticles in diagnostic imaging enhances imaging modalities such as MRIs or computerised tomography scans, making them much clearer and more accurate. Nanoparticles can also be used to detect life-threatening diseases such as various cancers quickly, at an earlier stage, enabling timely treatment and prevention. In addition, other nanoparticle-based biosensors have been developed which can detect low levels of biomolecules in fluids such as blood or urine, once again facilitating early detection and management.
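To put the investment figures above in perspective, a back-of-the-envelope calculation (a sketch using only the numbers cited in this section) shows what that 1997-to-2005 rise implies as an annual growth rate:

```python
# Government nanotechnology R&D investment, as cited above:
# $432 million in 1997 rising to about $4.1 billion in 2005 (8 years).
start, end, years = 432e6, 4.1e9, 2005 - 1997

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Roughly {end / start:.1f}x growth, or about {cagr:.1%} per year")
```

In other words, the cited figures correspond to roughly 9.5-fold growth, a compound rate of around a third more investment each year over that period.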
Similar dimensional applications have been used in the form of nanofluidic devices to isolate and analyse specific cells, proteins, and genetic material, providing rapid and accurate diagnosis of diseases. Recently, a growing association between nanotechnology and drug delivery has been evident through various technologies and systems, a development closely tied to gene therapy. By incorporating nanotechnology into drug delivery, researchers have been able to achieve effective and targeted drug delivery with minimal side effects, increasing the therapeutic efficacy of drugs. Furthermore, recently introduced DNA-based drug delivery devices, such as DNA guns and DNA vaccines, enhance drug delivery by delivering drugs specifically to the target site and reducing the associated toxicity, while simultaneously protecting the DNA molecules from degradation, modifying DNA sequences and correcting genetic mutations to increase the efficiency and safety of gene therapy as well. In short, with the development of technology in the field of medicine, researchers have turned these discoveries to their advantage, producing and researching further into nanomedicine to find ways to keep diagnosing and treating diseases. Progress aims for maximum efficacy while causing minimal side effects and potential harm to recipients.7

2.2 Impacts of nanomedicine in humans
People have been exposed to various nano-scale materials since childhood, and the new, emerging field of nanotechnology has become another potential threat to human life. Due to their small size, it is much easier for nanoparticles to enter the human body and cross its various biological barriers, making it possible for them to reach the most sensitive organs.
Scientists have proposed that nanoparticles less than 10 nm in size act like a gas and can enter human tissues easily, likely disrupting the cell's normal biochemical environment. Animal and human studies have shown that after inhalation and oral exposure, nanoparticles are distributed to the liver, heart, spleen, and brain, as well as to the lungs and gastrointestinal tract. To clear these nanoparticles from the body, components of the immune system are activated, yet research shows that the estimated half-life of nanoparticles in human lungs is about 700 days, posing a persistent threat to the respiratory system. During metabolism, some of the nanoparticles accumulate in liver tissue. They are more toxic to human health than larger particles of the same chemical substance, and it is usually suggested that toxicity is inversely proportional to the size of the nanoparticles. Because of their differing physicochemical properties in different biological systems, the unpredictable health outcomes of these nanoparticles were evident to scientists.8 In general, properties such as absorption, distribution, metabolism, and clearance contribute to their toxicological profile in biological systems. Toxicological concerns mean that size, shape, surface area, and chemical composition need to be considered during the manufacture of these nanoparticles, as they can exert mechanisms of cytotoxicity that interfere with cellular homeostasis. The toxicity of nanoparticles also depends on the chemical components on their surfaces. Some metal oxides, such as zinc oxide (ZnO), manganese oxide (Mn3O4), and iron oxide (Fe3O4), have intrinsic toxic potential, particularly iron oxide due to its frequent usage in nanomedicine. Nanoparticles made from these metal oxides can generally induce cytotoxic effects, meaning they cause harm to cells. However, these adverse effects are often very useful in cancer cell therapies, so they are not entirely destructive.
Another chemical component investigated in the context of nanoparticle toxicity is silver (Ag), as it is widely used and easily found in the environment. The cytotoxic effects of silver nanoparticles include induction of stress, DNA damage, and apoptosis, which is essentially the elimination of unwanted cells.9 The PM10 literature, the largest body of data on nanoparticle toxicity arising from inhalation, highlights particle terminology in relation to ambient effects. The data is provided in the following table, labelled Table 1: (Table 1)10 PM10 particles are particles with diameters of 10 micrometres or less, and they have proven to be a powerful driver of nanoparticle research. Due to their potential toxicity and small size, they penetrate the lungs when inhaled. Exposure to high concentrations can lead to effects such as coughing, wheezing, asthma attacks, bronchitis, high blood pressure, heart attacks, strokes and even premature death.11 The table gives insight into the differing toxicity of engineered nanoparticles. Most of the PM10 mass is considered non-toxic, which has given rise to the idea that specific components within PM10 drive its pro-inflammatory effects, making particles like CDNPs (combustion-derived nanoparticles) a much more likely candidate; more on that in Section 2.3. The small size of nanoparticles means that they have a larger surface area per unit mass, or a larger surface-area-to-volume ratio. Particle toxicology suggests that for toxic particles generally, increased particle surface area equates to increased toxicity. Substantial toxicological data, and limited data from epidemiological sources, also support the contention that nanoparticles in PM10 are important drivers of such adverse effects.
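The surface-area argument above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch assuming idealised solid spherical particles of uniform density; the iron oxide density used is an illustrative round figure, not a value from the text:

```python
import math

def specific_surface_area(diameter_m, density_kg_m3):
    """Surface area per unit mass (m^2/kg) of a solid sphere.

    For a sphere, area/mass = (4*pi*r^2) / (rho * (4/3)*pi*r^3) = 6 / (rho * d),
    so specific surface area scales inversely with diameter.
    """
    radius = diameter_m / 2
    area = 4 * math.pi * radius ** 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
    return area / mass

# Illustrative example: Fe3O4 particles, density taken as ~5200 kg/m^3
rho = 5200
for d_nm in (10_000, 100, 10):  # a PM10-scale particle vs. two nanoparticles
    ssa = specific_surface_area(d_nm * 1e-9, rho)
    print(f"{d_nm:>6} nm diameter -> {ssa:,.0f} m^2/kg")
```

Shrinking the diameter from 10 µm (a PM10-scale particle) to 10 nm multiplies the reactive surface area available per unit mass by a factor of 1,000, which is the geometric basis of the inverse size-toxicity relationship described in this section.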
These adverse effects include, but are not limited to, cardiovascular disease and internal inflammation such as pulmonary inflammation. This could result in changes in membrane permeability, which may in turn affect the potential for particles to distribute beyond the lung. Such instances include the impairment of vascular function after the inhalation of diesel exhaust. The downside of the data collected is that it is still limited, and not all studies of nanoparticles have shown significant translocation from the lung to the blood. In the past decade, the most striking effects of nanoparticles have been observed and recorded, and are displayed in Table 2, along with the particle types tested: (Table 2)12 The key difference between the dangers of nanoparticles and "traditional" particles is that, due to their reduced volume and size, nanoparticles are simply more potent at causing similar effects. Several of these effects are just quantitatively different from those of fine particles. The other large problem is that the introduction of nanoparticles has also given rise to new types of effects not seen previously with larger particles: for instance, mitochondrial damage from ambient nanoparticles, entry through the olfactory epithelium for manganese dioxide, gold, and carbon substances, platelet aggregation from single-walled carbon nanotubes (SWCNTs) and latex carboxylic acids, and cardiovascular effects from PM particles and SWCNTs.13 Other reported risks of nanoparticles are summarised in Table 3: (Table 3)14 Returning to iron oxide (Fe3O4) nanoparticles, these compounds have been used in drug delivery and diagnostic fields for some time now. They bioaccumulate in the liver and other organs across different organ systems.
In vivo studies have shown that after entering cells, iron oxide nanoparticles remain in cell organelles such as endosomes and lysosomes, are released into the cytoplasm after decomposing, and contribute to the cellular iron pool. Magnetic iron oxide nanoparticles have been observed to accumulate in the liver, spleen, lungs, and brain after inhalation, showing their ability to cross the blood-brain barrier. Research shows that their toxic effects are exerted in the form of cell lysis, inflammation, and blood coagulation.15 The exposure of cells to a high dose of iron oxide nanoparticles leads to the formation of excess reactive oxygen species (ROS), which are essentially unstable oxygen-containing molecules that react easily with other molecules in the cell. This can damage normal cells, with corresponding apoptosis, or cell death. Similar metals like iron, zinc, magnesium, etc. also negatively affect certain genes associated with age-related proteins and longevity, and hence could potentially be detrimental.16 Reduced cell viability (healthiness) has been reported as one of the most common toxic effects of iron oxide nanoparticles in in vitro studies. Iron oxide nanoparticles coated with different substances have shown varying cell viability results. For instance, for Tween-coated superparamagnetic iron oxide nanoparticles 30 nm in diameter tested on murine macrophage cells, low concentrations (ranging between 25–200 µg/mL for 2 hours of exposure) have been reported to show an increase in cell toxicity compared to high concentrations (ranging between 300–500 µg/mL for 6 hours of exposure).
Dextran is a biocompatible material extensively used in biomedical applications to coat nanoparticles and prevent agglomeration and toxicity; nevertheless, dextran-coated iron oxide nanoparticles still yielded varying degrees of cell toxicity after 7 days of incubation despite this added protection.17 Overall, this highlights how the cytotoxicity of the metal components of nanoparticles makes their application in medicine dangerous for the patient.

2.3 Impacts of nanotechnology on the environment
Unfortunately, the implementation of nanotechnology in medicine, although aimed at particular organisms, has adverse side effects that are not limited to the organism itself. The process of manufacturing these nanomaterials results in nanoparticles entering the environment through intentional releases as well as unintentional ones, such as atmospheric emissions and solid or liquid waste streams from production facilities. Furthermore, nanoparticles used in other products, such as paint, fabric, or personal and healthcare products, also enter the environment in proportion to their usage; especially in today's economy, the purchasing of such cosmetic and beauty products is at an all-time high (more on that in Section 2.4). Emitted nanoparticles are ultimately deposited on land and water surfaces. Nanoparticles on land have the potential to contaminate soil and migrate into surface and ground waters. Particles in solid wastes, wastewater effluents, or even accidental spillages can be transported to other aquatic systems by wind or rainwater runoff, severely damaging ecosystems and other natural habitats, simultaneously destroying the homes of other living organisms and possibly the organisms themselves.
However, the biggest release into the environment tends to come from spillages associated with the transportation of manufactured nanomaterials from production facilities and other manufacturing sites, as well as from intentional releases for environmental applications. Once again, exposure through inhalation is a leading factor in nanoparticle dangers. Airborne particles composed of nanomaterials of minuscule sizes may agglomerate into larger particles or longer fibre chains, changing their properties and potentially their behaviour in indoor and outdoor environments, which in turn changes how they affect the human body after exposure and entry.18 CDNPs are also an important component driving the adverse effects of environmental particulate air pollution. Combustion-derived nanoparticles originate from several sources, such as diesel soot, welding fume, carbon black and coal fly ash. Besides affecting people by inducing oxidative stress and exerting genotoxic effects, their components are an environmental and occupational hazard. Diesel exhaust particles are the most common CDNP in urban air and in environmental pollution. Pulverised coal combustion is a common and efficient method of coal burning in power stations: the pulverised coal is blown into the furnace and burned off, producing a fly ash emission. While this particulate emission is usually controlled and moderated, the control methods are not 100% effective, and some particles are still released into the environment.19 One of the largest problems here is the large gap in the research literature between nanotoxicology in humans and in the environment. Of the 117 studies included in a review by BMC Public Health, only 5 had assessed the environmental impact of exposure to nanoparticles. This significant gap in the scientific literature has been highlighted by multiple authors.
With the growing production and usage of nanoparticles, this has undoubtedly led to a gradual diversification of emission sources into both the aquatic and soil environments. As the release of nanoparticles into the environment primarily occurs during production, during application, and during disposal of products containing nanoparticles, as stated previously, these emissions occur both indirectly and directly. Nevertheless, the most prominent release of nanoparticles occurs during the application phase and, afterwards, the disposal phase. Studies have shown that only about 2% of the production volume is emitted. Furthermore, biomarkers such as soil samples and soybean seeds have been used as natural checks to determine the toxicity of the environment.20 The following review, which searched Medline, ScienceDirect, Sage Journals Online, the Campbell Collaboration, the Cochrane Collaboration, Embase, Scopus, Web of Science, CINAHL, Google, and Google Scholar, includes studies from 23 countries across several continents, the majority originating from Europe and Central Asia. Reportedly, the United States had the highest number of publications, followed by China, India, and Saudi Arabia. Yet most of the studies assessed the impact on human health, while only 5 studies assessed effects on the environment, and a mere 3 studies addressed both human health and environmental impact. This is depicted in Figure 2: (Figure 2)21 The studies investigated the environmental and human health effects of inorganic-based nanoparticles as well as carbon-based nanoparticles. More concerning is the fact that attention was divided unequally, focusing far more on human health than on impacts on the surrounding environment.
As a result, despite knowing that most nanoparticles are toxic to some extent, comparatively little research has been conducted on how to effectively reduce this toxicity and make them more compatible with the environment.

2.4 The shaping of the economy and the wider world
Science is a social endeavour, and it is inevitably tangled up with socioeconomic issues. While science might in principle operate outside the controversial hand of politics, in the end innovation in the scientific field affects economic considerations and inequalities. Technology itself is neutral; its capacity and potential can be wielded by anyone. Yet no technological advancement is impartial, because behind every use there is always a motive and a need for profit.22 Unfortunately, producing and manufacturing advanced technologies such as nanoparticles is neither a simple nor a cheap process. Naturally, that means using them for targeted drug delivery and other applications is an expensive method of medical treatment. For instance, the business of nanomedicine had an estimated value of $53 billion USD in 2009. By 2025, the industry is projected to reach a total market value of around $334 billion USD. An increasing number of new-generation nanotherapeutics will soon enter the market, raising the market value overall. According to the Grand View Research report, the United States remains the leader in the nanomedicine industry, owning 46% of total international nanomedicine revenue in 2016, followed by Europe, with major players in the market such as Pfizer.23 With Western countries being the primary driving force in technological development, the distribution of such equipment is heavily tilted to one side. Less fortunate countries are unlikely to see the implementation of nanotechnology in medicine, as opposed to high-income countries.
Personalised medicine for conditions such as rare diseases creates further obstacles for drug development, obstacles that are, once again, more prominent in countries lacking sufficient healthcare. As a result, large biotech companies find these places and medicines less financially rewarding to invest in compared with universal drugs. Furthermore, nanotechnology connotes the use of the most advanced technological tools for medical ends, for which reason discussions of its socioeconomic effects are of heavy significance, even though they are thoroughly missing from today's discourse. This gives rise to ethical issues, highlighting how geopolitics comes into play in the handling of medicine. Especially given the wealth disparities between countries in power and the rest, this emphasises the detrimental gap between rich and poor, and thus the uneven accessibility of nanomedicine around the world.24

3.0 Methodology
Fortunately, the technological advancements we benefit from today have allowed scientists and researchers to develop better nanotechnology and refine its flaws. These improvements include, but are not limited to, using different metal compounds and compositions in nanoparticles to reduce toxicity, more precise nanoparticles for disease diagnosis and prevention, greater biocompatibility, and DNA-specialised nanoparticles. One such example is theranostic nanoparticles. Theranostic particles are essentially multifunctional nanomaterial systems, well designed and specialised for specific and personalised disease management, combining diagnostic and therapeutic capabilities in one biocompatible and biodegradable nanoparticle. Theranostic particles can be engineered in multiple ways.
For instance, therapeutic drugs such as anti-cancer drugs can be loaded into existing imaging nanoparticles such as quantum dots or iron oxide nanoparticles, or unique nanoparticles with intrinsic imaging and therapeutic properties, such as porphysome technology, can be engineered, alongside modifications with polyethylene glycol and different targeting ligands to improve blood circulation half-life and active tumour-targeting capability. This helps address the limitations of nanoparticles accumulating in tumour tissue only passively, via the enhanced permeability and retention effect. Actively tumour-targeted theranostic nanoparticles are being developed by further conjugating different targeting ligands that recognise and selectively bind to receptors overexpressed on certain tumour cell surfaces, such as in tumour vasculature targeting, which has been considered a good, widely applicable approach for most functionalised organic and inorganic nanomaterials. The targeting ligands can include antibodies, small peptides or molecules, engineered proteins, and so on. In addition, theranostic nanoparticles have been developed to target other receptors, such as prostate-specific membrane antigen (PSMA) in prostate cancer and the urokinase plasminogen activator receptor (uPAR) in pancreatic cancer. A theranostic nanoplex, containing imaging reporters such as radioisotopes and a PSMA-targeting component, was developed to deliver small interfering RNA (siRNA) and a prodrug enzyme to PSMA-expressing tumours. This nanoplex was investigated using non-invasive imaging to evaluate its diagnostic aspects of PSMA imaging, as well as its conversion of the prodrug to a cytotoxic drug. Results showed no significant immune response or obvious toxicity to the liver or kidneys.
However, the downside to theranostic nanoparticles is that, ideally, they must quickly and selectively accumulate in targets of interest, report the biochemical and morphological characteristics of diseases, efficiently deliver a sufficient amount of drug on demand without damaging healthy organs, and be cleared from the body within hours or biodegraded into nontoxic byproducts. Even though numerous types of both organic and inorganic theranostic nanoparticles have been developed in the last decade for treating diseases such as cancer, none of them has satisfied all of these criteria yet.25 CRISPR/Cas systems are also a prime example of how nanotechnology has been effectively implemented in medicine. CRISPR is a revolutionary technology with the ability to cleave target nucleic acids with high precision and programmability. However, issues such as insufficient cellular entry, degradation in biological media, and off-target effects make CRISPR unreliable at times. Incorporating nanotechnology enables and improves intracellular and targeted delivery, stability, stimulus-responsive activation in target tissues, and adjustable pharmacological properties. Nanotechnology can also enhance CRISPR-mediated detection by increasing sensitivity, facilitating simpler readouts, and reducing time to readout.26 The Cas9 enzyme acts as part of a large ribonucleoprotein (RNP) complex, a class of molecules that helps with transcription, translation, the regulation of gene expression and the metabolism of RNA. The incorporation of nanotechnology with Cas systems provides a powerful new means for the fast, specific, and ultrasensitive detection of protein biomarkers, whole cells, and small molecules, in addition to nucleic acid targets.
Using nanomaterials as signal readouts can enhance detection sensitivity or reduce the need for specialised equipment for signal detection.27 Nanorobots are a relatively new technology: tiny devices that can be guided to specific parts of the body, with drug delivery as their primary application so far. One example is the "origami nanorobot" developed by researchers at Arizona State University (ASU), a flat sheet of synthetic DNA coated with a blood-clotting enzyme that can be folded into various shapes. Injected into the bloodstream, it is programmed to seek out tumour cells; on locating them, it attaches to their surface and delivers the clotting enzyme, cutting off the blood supply the tumour cells need to survive. Further research carried out by ASU showed promising therapeutic potential, as the approach also limited the spread of metastases. "Smart pills" serve as nanoscale sensors designed to detect diseases long before symptoms become apparent to the patient. Conceived by Jerome Schentag, a professor of pharmaceutical science at the University at Buffalo, the smart pill aims to electronically track and direct the delivery of a drug to a predetermined location in the gastrointestinal tract. In addition, a built-in miniature camera helps monitor the bowel or colon to detect internal bleeding. Data collected by the pill are transmitted wirelessly to a device controlled by the patient, allowing convenient, continuous monitoring of internal health.28 Finally, green-nanotechnology-driven drug delivery assemblies aim to reduce the toxicity of the nanoparticles used, to the benefit of both the target organisms and the surrounding environment. By applying the concepts of green chemistry and green engineering to the manufacture of nanobiomedicine, the aim is to create eco-friendly nanoassemblies with fewer negative environmental and health impacts. 
As a result, the combination of green nanomaterials with drugs, vaccines, or diagnostic markers will hopefully be the next step in propelling the field of green nanomedicine. Many inorganic nanoparticles manufactured on the principles of green engineering and nanotechnology have already been introduced to the market. For example, gold and silver nanoparticles are less toxic than other metals such as copper and zinc. Quantum dots, organic polymeric nanoparticles, mesoporous silica nanoparticles, dendrimers, and nanostructured lipid carriers have also been used. These nanomaterials are loaded with drugs, DNA molecules, or specific enzymes, proteins, or peptides for use in nanomedicine. Research now continues to establish the differences in yield and effectiveness between nanomedicines produced by conventional bioengineering and those manufactured under green bioengineering principles, which will allow scientists to choose the best manufacturing conditions for nanoparticles in the future. With DNA molecules especially, DNA-based drug delivery devices aim to advance personalised targeted drug therapies and further improve diagnosis and targeted drug delivery.29 Conclusion In summary, the innovations in technology over the years have undoubtedly brought to light newer and improved methods of diagnosing and treating patients in the medical field. The progress made with nanotechnology is proof of its immense potential, and its history of successes highlights its usefulness and capabilities. Most vitally, its ability to act as a carrier for targeted drug delivery is what makes nanotechnology so special and powerful, especially when dealing with diseases that are difficult to cure, such as cancer. 
By providing detailed visualisations of organism interiors and assisting in the modification and editing of genetic information, the versatility of nanoparticles should not be overlooked. As discussed above, however, neither should the drawbacks of implementing nanoparticles in medicine. Nanotoxicity has repeatedly proven to be a setback, and its effects on both organisms and the surrounding environment call into question whether nanoparticles are currently the best strategy in the medicinal field. Nor can the extreme costs of this technology be ignored, which, among many other things, contribute to the widening economic gap between income classes. Nevertheless, history has shown how the relentless efforts of scientists and researchers have overcome obstacles in medicine, and these drawbacks will likely be resolved in future years. I strongly believe the potential of nanotechnology for targeted drug delivery is too great to pass up, and I think many others would feel the same way. References [1] PubMed Central. (2019). The History of Nanoscience and Nanotechnology: From Chemical–Physical Applications to Nanomedicine. [Online]. National Center for Biotechnology Information. Last Updated: 27 December 2019. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6982820/#:~:text=The%20American%20physicist%20and%20Nob [Accessed 18 February 2024]. [2] PubMed Central. (2018). Impact of Nanoparticles on Brain Health: An Up to Date Overview. [Online]. National Center for Biotechnology Information. Last Updated: 7 December 2018. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6306759/#:~:text=The%20main%20routes%20of%20exposure,th [Accessed 18 February 2024]. [3] PubMed Central. (2016). Toxicity of Nanoparticles and an Overview of Current Experimental Models. [Online]. 
National Center for Biotechnology Information. Last Updated: 20 January 2016. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4689276/ [Accessed 18 February 2024]. [4] PubMed Central. (2008). Drug delivery and nanoparticles: Applications and hazards. [Online]. National Center for Biotechnology Information. Last Updated: 3 June 2008. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527668/#:~:text=Toxicological%20effects%20of%20nanopar [Accessed 18 February 2024]. [5] National Center for Biotechnology Information. (n.a.). J Environ Sci Health C Environ Carcinog Ecotoxicol. [Online]. NCBI. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844666/figure/F1/ [Accessed 18 February 2024]. [6] PubMed Central. (2010). Toxicity and Environmental Risks of Nanomaterials: Challenges and Future Needs. [Online]. National Center for Biotechnology Information. Last Updated: 24 March 2010. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844666/ [Accessed 18 February 2024]. [7] PubMed Central. (2023). Emerging Applications of Nanotechnology in Healthcare and Medicine. [Online]. National Center for Biotechnology Information. Last Updated: 28 September 2023. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10536529/#:~:text=Nanotechnology%20is%20used%20to%20con [Accessed 18 February 2024]. [8] PubMed Central. (2016). Toxicity of Nanoparticles and an Overview of Current Experimental Models. [Online]. National Center for Biotechnology Information. Last Updated: 20 January 2016. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4689276/ [Accessed 18 February 2024]. [9] PubMed Central. (2018). Nanoparticles in Medicine: A Focus on Vascular Oxidative Stress. [Online]. National Center for Biotechnology Information. Last Updated: 26 September 2018. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6178176/ [Accessed 18 February 2024]. [10] National Center for Biotechnology Information. (n.a.). 
Various denominations of particles in inhalation toxicology and drug delivery in. [Online]. PubMed Central. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527668/table/tbl3/?report=objectonly [Accessed 18 February 2024]. [11] Marlborough District Council. (n.a.). Health effects of PM10. [Online]. Marlborough District Council. Available at: https://www.marlborough.govt.nz/environment/air-quality/smoke-and-smog/health-effects-of-pm10#:~:tex [Accessed 18 February 2024]. [12] National Center for Biotechnology Information. (n.a.). Toxicity of engineered and combustion (nano) particles as illustrated by their m. [Online]. PubMed Central. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527668/table/tbl4/?report=objectonly [Accessed 18 February 2024]. [13] PubMed Central. (2008). Drug delivery and nanoparticles: Applications and hazards. [Online]. National Center for Biotechnology Information. Last Updated: June 2008. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527668/#:~:text=Toxicological%20effects%20of%20nanopar [Accessed 18 February 2024]. [14] National Center for Biotechnology Information. (n.a.). Possible Risks of Nanomaterials. [Online]. PubMed Central. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844666/table/T1/?report=objectonly [Accessed 18 February 2024]. [15] PubMed Central. (2016). Toxicity of Nanoparticles and an Overview of Current Experimental Models. [Online]. National Center for Biotechnology Information. Last Updated: 20 January 2016. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4689276/ [Accessed 18 February 2024]. [16] Elsevier. (2021). Iron oxide nanoparticles in biological systems: Antibacterial and toxicology perspective. [Online]. ScienceDirect. Last Updated: December 2021. Available at: https://www.sciencedirect.com/science/article/pii/S2666934X2100026X#:~:text=However%2C%20exposing%20 [Accessed 18 February 2024]. [17] PubMed Central. (2016). 
Toxicity of Nanoparticles and an Overview of Current Experimental Models. [Online]. National Center for Biotechnology Information. Last Updated: 20 January 2016. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4689276/ [Accessed 18 February 2024]. [18] PubMed Central. (2010). Toxicity and Environmental Risks of Nanomaterials: Challenges and Future Needs. [Online]. National Center for Biotechnology Information. Last Updated: 24 March 2010. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844666/ [Accessed 18 February 2024]. [19] PubMed Central. (2005). Combustion-derived nanoparticles: A review of their toxicology following inhalation exposure. [Online]. National Center for Biotechnology Information. Last Updated: 2005. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1280930/ [Accessed 18 February 2024]. [20] BMC. (2023). Human and environmental impacts of nanoparticles: a scoping review of the current literature. [Online]. BMC Public Health. Last Updated: 2023. Available at: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-023-15958-4#:~:text=Using%20severa [Accessed 18 February 2024]. [21] BMC. (n.a.). Effect/impact of nanoparticles on human/environmental health. [Online]. BMC Public Health. Available at: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-023-15958-4/figures/3 [Accessed 18 February 2024]. [22] PubMed Central. (2021). Nanomedicine for the poor: a lost cause or an idea whose time has yet to come?. [Online]. National Center for Biotechnology Information. Last Updated: 14 May 2021. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8120867/ [Accessed 18 February 2024]. [23] PubMed Central. (2019). Cost–effectiveness of nanomedicine: estimating the real size of nano-costs. [Online]. National Center for Biotechnology Information. Last Updated: 6 June 2019. 
Available at: https://www.futuremedicine.com/doi/10.2217/nnm-2019-0130 [Accessed 18 February 2024]. [24] PubMed Central. (2021). Nanomedicine for the poor: a lost cause or an idea whose time has yet to come?. [Online]. National Center for Biotechnology Information. Last Updated: 14 May 2021. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8120867/ [Accessed 18 February 2024]. [25] PubMed Central. (2014). Theranostic Nanoparticles. [Online]. National Center for Biotechnology Information. Last Updated: 1 December 2015. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4255955/#:~:text=Theranostic%20nanoparticles%20are%20mu [Accessed 18 February 2024]. [26] Shivudu Godhulayyagari, Sasha B. Ebrahimi, Devleena Samanta. (2023). Enhancing CRISPR/Cas systems with nanotechnology. [Online]. Trends in Biotechnology. Last Updated: 12 July 2023. Available at: https://www.cell.com/trends/biotechnology/abstract/S0167-7799(23)00183-X [Accessed 18 February 2024]. [27] PubMed Central. (2022). Nanotechnology Powered CRISPR-Cas Systems for Point of Care Diagnosis and Therapeutic. [Online]. National Center for Biotechnology Information. Last Updated: 8 September 2022. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9484831/ [Accessed 18 February 2024]. [28] Mary Bellis. (n.a.). History of the Smart Pill. [Online]. TheInventors. Available at: https://theinventors.org/library/inventors/bl_smart_pill.htm#:~:text=Jerome%20Schentag%2C%20professo [Accessed 18 February 2024]. [29] Shiza Malik, Khalid Muhammad, and Yasir Waheed. (2023). Emerging Applications of Nanotechnology in Healthcare and Medicine. [Online]. National Center for Biotechnology Information. Last Updated: 28 September 2023. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10536529/#:~:text=Nanotechnology%20is%20used%20to%20con [Accessed 18 February 2024].
- Improving Group Dynamics Using Psychology
Author: Steffi Kim In today's highly interconnected world, collaboration and group work are becoming more prevalent than ever. But from brainstorming ideas to dividing up tasks, group work introduces a whole new dimension of challenges. Why do some groups fail, while others succeed? How do panels of experts sometimes reach such poor decisions? This article will briefly explore four common psychological phenomena, and provide solutions to boost group productivity and success in all spheres of life. Groupthink Groupthink is when a group exhibits bad decision-making due to intense pressure to conform and suppress dissenting opinions. Early on, the group becomes bent on a particular course of action, usually dictated by a leader, and ignores any facts or evidence suggesting a different approach. The group becomes insulated and closed off from outside opinions, creating echo chambers and a facade of unanimity. Groupthink can become dangerous when ethical considerations or broader consequences are ignored, and the group fails to acknowledge risks or create a contingency plan. When selecting his 2008 national security team, the President of the United States, Barack Obama, specifically mentioned how he wanted to avoid groupthink. Groupthink stems from the well-intentioned desire to maintain harmony and agreement, and is more prevalent in groups with a strong "us against the world" mindset. To avoid groupthink, encouraging debate and avoiding rushed conclusions are key. Designating a "devil's advocate" to raise concerns, or having low-ranking members speak first so they don't have to contradict the leader, are also ways to combat this phenomenon. The Abilene Paradox The Abilene Paradox occurs when a group collectively comes to a decision that none of the individual members actually agree with. The problem is not disagreement: privately, the members all agree, yet the group itself moves in the opposite direction. 
For instance, a company leader proposes a new initiative because she thinks that that's what everyone else wants to hear. The employees all secretly harbour doubts but speak highly of it to others because they believe that everyone else supports it. The result? A failed initiative that no one actually wanted, and a waste of time and resources. The Abilene Paradox is surprisingly common and can be due to action anxiety, where people imagine the great disasters that could occur if they speak their mind, and use this as an excuse for inaction. Although similar to groupthink, the Abilene Paradox is fundamentally different. In groupthink, individuals self-censor and change their own beliefs to align with the group, whereas, in the Abilene Paradox, individuals are aware that they personally disagree. To avoid this trap, group members must consider the real risk of inaction and remaining silent. Group Polarisation Group Polarisation refers to the likelihood of a group taking more extreme stances than the initial opinions of its members. When surrounded by like-minded individuals, people are more likely to deepen their own commitment to a cause, in part due to a desire to be socially accepted. As group discussion goes on, hearing confirming arguments made by other group members reinforces and strengthens an individual's own beliefs. Over time, as groups self-segregate and discuss amongst themselves, the differences between groups become more and more exaggerated, a phenomenon known as the Accentuation Effect. Liberal groups become more liberal, intellectual groups become more intellectual, and conservative groups become more conservative. Group polarisation has crucial political and societal implications, and groups with diverse opinions and healthy disagreement are less susceptible. Social Loafing In every group, there are usually a few people who don't pull their own weight. 
Social Loafing is the psychological term used to describe individuals who put in less effort in group settings than they would on their own. In one classic study, participants asked to shout and clap as loudly as they could produced only about a third as much noise per person when they believed they were part of a group as when they believed they were alone. The same pattern holds for tug-of-war, where the collective effort of teams was half the sum of the individual efforts. When individual effort is not measured, responsibility is dispersed amongst the group, with no individual held accountable. Monitoring individual progress and making individual actions distinguishable reduces social loafing. Social loafing also decreases if the group has a challenging goal and incentives to work hard. Individuals heighten their own efforts when they view other team members as incompetent, and forming smaller groups where members feel indispensable is an effective way to reduce freeriding. References: "Groupthink." Psychology Today. Accessed June 15, 2024. https://www.psychologytoday.com/us/basics/groupthink. Harvey, Jerry B. "The Abilene Paradox: The Management of Agreement." Organizational Dynamics 17, no. 1 (June 1988): 17–43. https://doi.org/10.1016/0090-2616(88)90028-9. Hoffman, Riley. "Social Loafing in Psychology: Definition, Examples & Theory." Simply Psychology, September 7, 2023. https://www.simplypsychology.org/social-loafing.html. Myers, David G. Social Psychology, 2012. https://diasmumpuni.files.wordpress.com/2018/02/david_g-_myers_social_psychology_10th_editionbookfi.pdf.
- Psychological Impact of Lockdown on Young People
Author: Lekh Parekh Abstract We conducted a survey to study the psychological impact of the COVID-19 pandemic and the associated lockdown on the mental health of a section of young people in urban India. We present the findings from data collected from more than 200 young people within the 14 to 18 year age group. We explored early signs of anxiety and depression. Hypothesis: The duration of the 2020 COVID-19 pandemic and the subsequent lockdown has a direct association with negative psychological states in young people, specifically early symptoms of anxiety and depression. Introduction: On 24th March 2020, the Indian Prime Minister Narendra Modi announced India's first nationwide three-week lockdown due to the COVID-19 pandemic. Stretching from 25th March to 14th April 2020, the lockdown sparked a wave of problems, criticism and hope. While multiple studies have focused on the welfare of deprived communities and on the physical and psychological impact on adults, our study specifically focuses on one group, i.e. 14 to 18 year olds in a cosmopolitan city. A lack of attention towards this age group and the psychological well-being of this population could have a catastrophic impact on motivation, social interaction and general health for years to come. Methodology: The survey respondents were 14 to 18 year old individuals who were characterized by a higher socioeconomic status with access to computers and social media, attending private schools. This group is more likely to aspire to enroll in foreign universities. They were all residents of Mumbai, a cosmopolitan city in India. The primary data was collected as part of a survey via a Google Form, with most questions designed to be answered on a linear scale of 1 to 5 (1 indicating "Not at all" and 5 indicating "Very") to indicate the severity of psychological symptoms. The form included a consent section to document consent from both participant and parent/guardian. 
Participants ticking the consent box were considered as having opted in to the survey. The respondents' names and identifying details were not part of the Google Form in order to ensure anonymity. The form was disseminated among different school social media groups. A total of 226 completed forms were returned. The data was collected during the active phase of lockdown. Results and Analysis: Anxiety: The following symptoms were considered: 1) Worry about the Future 2) Duration of Sleep 3) Difficulty in Concentration Worry about the Future: University applications are a significant consideration for high school students within this specific socio-economic group. Figure 1 As seen in Figure 1, 63.3% of respondents were very worried (scores of 4 or 5) about their future college applications. Such a high degree of worry at a young age, within only three weeks of lockdown, can be construed as an early sign of anxiety. Duration of Sleep: Figure 2 Duration of sleep is an indicator of a rested body and a peaceful mind. We considered sleep of 4 hours a night or less as an inadequate duration of sleep. Almost 13% of the survey respondents reported an inadequate duration of sleep. Inadequate sleep is a sign of several psychological disturbances, including anxiety and depression. Difficulty in Concentration: Figure 3 As seen in Figure 3, more than half of the respondents (56.2%) found it hard to concentrate for long periods of time. Depression: We considered the following as early symptoms of depression: 1) Helplessness 2) Irritability 3) Loss of Energy Helplessness: Figure 4 As seen in Figure 4, 55.8% of the respondents reported a sense of helplessness at scores of 4 and 5. This feeling of helplessness indicates the possibility of a depressed psychological state. 
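The percentages reported for each symptom are simple proportions of respondents choosing 4 or 5 on a 1-to-5 Likert item. A minimal sketch of that tally, using invented responses rather than the actual survey data (the function name and example values are illustrative only):

```python
def share_scoring_at_least(responses, threshold=4):
    """Fraction of Likert (1-5) responses at or above a threshold score."""
    if not responses:
        return 0.0
    high = sum(1 for r in responses if r >= threshold)
    return high / len(responses)

# Hypothetical responses to a "worry about the future" item
worry = [5, 4, 3, 5, 2, 4, 5, 1, 4, 3]
print(share_scoring_at_least(worry))  # fraction of respondents scoring 4 or 5
```

The same helper, applied per question, would reproduce figures such as the 63.3% "very worried" proportion from the raw form responses.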
Irritability: Figure 5 Almost 72% of survey respondents reported feeling more irritated than before the pandemic, which is a potential marker for the mood changes of depression. Loss of Energy: Figure 6 As seen in Figure 6, more than 48% of survey respondents scored 4 or 5 when asked to rate the extent of their loss of energy. This is nearly half of the entire group. A prolonged period of inactivity due to loss of energy can have multiple effects, ranging from deterioration of physical health to reduction of future career and life chances. Therefore, detecting this symptom in its early stages is vital in preventing long-term impacts. Despite these effects, we also found that a large proportion of the young people were partaking in positive coping activities involving exercise/yoga (56.6%), socialising with friends (72.6%) and games and interactions with their family (45.6%). These could potentially be protective. Strengths and Weaknesses: Weaknesses: Many of the group of 226 young people were likely to have known other respondents to the survey, which could be a source of bias. Unfortunately, the economically challenged section could not be studied. The study population consisted of English-speaking pupils of private schools from relatively affluent families. None of them were school dropouts or working to earn a wage. This limits the generalisability of the results. The impact of the pandemic on economically deprived groups from the same age category is likely to be very different and much more severe, but this conclusion cannot be drawn from our study. Strengths: As this study is psychological, most of the questions in the survey were designed on a Likert scale, which provided information about the severity of the symptom in question. As the survey form was anonymised and avoided direct interviews in person or on the phone, it minimised bias, for example, from the societal pressure to present a positive view to an interviewer. 
As the survey form was likely to have been completed by the respondents on their own, as part of their natural social media interaction, it is less likely to have introduced familial involvement and related sources of bias. As the survey could be completed online at a convenient time and in the respondents' own space, it is likely to have provided opportunities for introspection and reflection. Potential for future research: A group with similar characteristics could be surveyed again to see differences in psychological state after the end of lockdown. Conclusion: A high proportion of young people surveyed in this study experienced symptoms of anxiety and depression during the brief three-week period of lockdown. An association was found between the period of lockdown and some symptoms of depression and anxiety. Our society has advanced to such an extent that young people are no longer content to spend time at home in isolation, devoid of activity and social contact. The digital world has created the expectation of constant sensory or mental stimulation. Restriction of normal social mobility and a sense of uncertainty about the future, exacerbated by the disruption of normal routines due to school closures as well as the inescapable 24/7 news coverage, were unusual and unprecedented challenges. This set of circumstances could be difficult for an adult to adapt to, and even more challenging for a young person to process. It is in this context that the recent lockdown posed a risk to the overall wellbeing of the surveyed group. However, as the duration of this concerning psychological state is likely to have been short at the point of the survey, it is plausible that a significant proportion of respondents would not progress to clinical anxiety and depression, especially if addressed at an early stage. 
Further, recovery and a return to a semblance of normality after the end of the nationwide lockdown are likely to have reversed the adverse psychological findings reported in this study. However, given that a vaccination programme for COVID-19 will take a couple of years to be implemented and pockets of resurgence and localised lockdowns remain a possibility, our survey highlights the need for awareness and recognition of remedial measures. The measures described above could be efficacious in reducing the detrimental impact of lockdown on the mental health of young people. Bibliography: "Anxiety Disorders." Mayo Clinic, Mayo Foundation for Medical Education and Research, 4 May 2018, www.mayoclinic.org/diseases-conditions/anxiety/symptoms-causes/syc-20350961. Schimelpfening, Nancy. "8 Ways to Improve Your Mood When Living With Depression." Verywell Mind, 20 Mar. 2020, www.verywellmind.com/tips-for-living-with-depression-1066834. Melinda. "Coping with Depression." HelpGuide.org, www.helpguide.org/articles/depression/coping-with-depression.htm. Holland, Kimberly. "20 Ways to Fight Depression." Healthline, Healthline Media, 16 Oct. 2001, www.healthline.com/health/depression/how-to-fight-depression#today-vs.-tomorrow.
- Celebrating Ann Kiessling: The life and legacy of a pioneer scientist
Author: Maia Zaman Arpita Challenging assumptions and paving innovative routes, Ann Kiessling's courageous quest for knowledge has revolutionised our comprehension of reproductive biology and stem cell research. Ann Drue Anderson, who later became Ann Kiessling, was born in 1942 and raised in Oregon. She grew up as a science enthusiast and pursued a nursing degree from Georgetown University and the University of Virginia. Furthering her journey, she earned degrees in chemistry and organic chemistry from Central Washington University and a Ph.D. in biochemistry from Oregon State University. Her immense interest in research has given the world work that is both groundbreaking and controversial. One of her first notable achievements was the discovery of reverse transcriptase in normal human cells (1979), made during her postdoctoral research. This discovery illuminated the link between viruses and cancer, challenging traditional beliefs and opening the door for further investigation into the influence of retrovirus sequences in human genes and their effects on human development and physiology. Today, Kiessling is acknowledged as a trailblazer in both reproductive biology and stem cell research, and her work has provided valuable insights into key areas within the discipline. Motivated by a desire to pursue research that is overlooked by larger organisations due to political considerations, she established the Bedford Stem Cell Research Foundation in 1996. The main objective of this independent, non-profit institute was the application of stem cells to treat currently incurable conditions such as HIV and spinal cord injuries. 
Throughout her career, Kiessling has held roles in various esteemed establishments, such as Oregon Health Sciences University and Harvard Medical School. Her meticulous research and innovative approach to science have earned her numerous honours and awards, including the Alumni Achievement Award from the University of Virginia School of Nursing. The influence of her work persists, inspiring and educating scientists and researchers globally. Her impact on virology, reproductive biology, and stem cell research will endure, shaping our comprehension of human health and disease for years to come.
- Healthy Living and a More Sustainable Environment
Author: Audrey Zhang (NWIW Internship 2019) Abstract Numerous studies from a health or environmental standpoint have shown how the population can make changes to address each field's respective issues. Combining results from several research studies, this paper shows how shifting one's diet to include fewer animal products positively impacts both health and the environment. This dietary change not only decreases one's carbon footprint, but also minimizes risks for some of the leading causes of death, in particular cholesterol-related diseases. Introduction In recent years, health and environmental awareness have risen to become a national, or even global, concern. In the U.S., staggering statistics of diet-related conditions confirm that change needs to happen at the individual level. Alongside health awareness, the ramifications of the greenhouse effect are also pushing the nation to be more environmentally conscious. Despite seemingly having very different end goals in mind, environmentalists and nutritionists can join forces and promote an option that benefits everyone. By reducing the intake of animal foods, both the risks of life-threatening diseases and carbon footprints can be decreased. This will not only help improve health conditions but also lower carbon emissions on a national scale. Cholesterol and Its Effects on Health Cholesterol is an organic molecule created by the body or ingested from animal foods. Around 20% of the body's cholesterol comes from the diet, while the other 80% is made in the body. This waxy substance is synthesized in the liver and requires acetyl-CoA, an essential molecule for regulating fatty acid synthesis. The acetyl-CoA is passed through a variety of complex reactions that ultimately produce cholesterol. 
Although often shunned as an unhealthy molecule, an adequate amount of cholesterol is essential for building cell membranes, making hormones, maintaining metabolism, and producing vitamins and bile acids. However, once blood cholesterol rises above 200 mg/dL (200 milligrams of cholesterol per deciliter of blood), the patient is considered to have high blood cholesterol (diagnosed as hypercholesterolemia).[2] A rise in the body's cholesterol level can be linked to either hereditary disease or faulty dietary habits. It is estimated that almost 1 in 3 American adults have high cholesterol, but only 1 in 300 cases is familial hypercholesterolemia, ruling out hereditary disease as the primary cause[3]. As a result, faulty dietary habits account for the majority of cases. These habits are characterised by eating an excessive amount of animal foods, frequently exceeding the dietary recommendation of 300 milligrams of cholesterol per day. Excessive ingestion of cholesterol results in high blood cholesterol levels, a source of various potentially fatal diseases[2]. LDL (low-density lipoprotein) and HDL (high-density lipoprotein) are the two categories of cholesterol in the body. Dubbed the "bad" cholesterol, high LDL levels result in a buildup of cholesterol on artery walls, blocking or narrowing vessels. This blockage or narrowing hardens the arteries, resulting in a medical condition called atherosclerosis, which puts the body at risk for life-threatening diseases such as coronary heart disease, stroke, Type 2 diabetes, and high blood pressure: some of America's leading causes of death.
On the other hand, HDL, the "good" cholesterol, lowers blood cholesterol levels by absorbing cholesterol in the bloodstream and carrying it back to the liver.[2] Figure 1: LDL and HDL Cholesterol in Arteries[4] Figure 2: Atherosclerosis in Arteries[5] Figure 3: Death Rates for the 10 Leading Causes of Death in the United States (2016, 2017)[6] Risks for Cholesterol-Related Diseases Based on Diets The impact of cholesterol ingested from animal foods is most notable when comparing the health of people with different diets. These dietary habits range from meat-lovers to vegans and are categorised by the amount of food consumed from each food group. As shown in Figure 4, the share of dietary energy that animal foods account for decreases from around 35% in a meat-lover's diet to 0% in a vegan diet. In the diets between these two extremes (average to vegetarian), animal foods account for 10-25% of energy. Since dietary cholesterol intake falls with the amount of animal foods consumed, eating fewer animal foods lowers cholesterol levels in the body. For example, a vegan diet, which contains no dietary cholesterol, is shown to reduce cholesterol levels by 10-30% compared to an average diet.[7] Figure 4: Food Energy Distribution in Different Diets[7] As cholesterol levels drop in a less animal-food-intensive diet, risks for cholesterol-related diseases decrease as well[8,9]. A 2015 study from the Atherosclerosis Risk in Communities (ARIC) project followed the health of 11,000 adult males for a median of 22.7 years[8]. It found that the men who were the highest consumers of processed meat (i.e. jerky, bacon, sausage) had a 24% higher chance of stroke, a cholesterol-related disease. Similarly, the highest consumers of red meat (beef, lamb, pork) had a 41% higher chance of stroke.
In total, the highest consumers of both red and processed meat had a 62% higher chance of stroke than the average male. Since this group often consumes large volumes of food extremely high in dietary cholesterol, their bodies are more at risk for buildup and blockage of blood vessels, so it comes as no surprise that they are much more susceptible to strokes. On the other hand, a 2017 analysis by the Icahn School of Medicine showed that a plant-based diet can decrease the risk of heart disease, another cholesterol-related condition, by 42%[9]. Furthermore, those who maintained a healthy plant-based diet were 25% less likely to develop heart disease in the next twenty years than those who did not. Figure 5: Probability of Heart Disease or Stroke Based on Diet[8,9] Reducing Emissions with a Low-Cholesterol Diet Research shows that eating fewer animal foods also reduces carbon footprints, in addition to reducing the risk of cholesterol-related diseases. Foods containing dietary cholesterol are often the most carbon-intensive compared with plant-based foods, due to the inefficient transformation of energy.[7] A 2018 study found that food production accounts for 26% of global GHG (greenhouse gas) emissions, with animal foods accounting for a staggering 31% of that number[10]. Figure 6: Carbon Emissions of Different Types of Food[11] Diets consisting of high proportions of animal foods, in particular beef, therefore inevitably have a high carbon footprint. Conversely, switching to less carbon-intensive foods can greatly reduce one's footprint. This requires no dramatic change in daily life, as many alternatives providing similar nutrition with both less cholesterol and lower emissions are available. For example, in the meat lover's diet in Figure 7, beef accounts for 1.5 of the 3.3 t CO2e. By simply cutting out the beef, one can reduce their carbon emissions by 1.5 tonnes[7].
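The beef arithmetic above can be checked with a short numeric sketch. This is not from the paper: only the 1.5 t beef share and 3.3 t total are quoted figures, and the split of the remaining 1.8 t is an assumed breakdown for illustration.

```python
# Annual diet footprints in tonnes CO2e per person.
# "beef" is the value quoted from Figure 7; the other entries are an
# assumed split of the remaining 1.8 t, for illustration only.
meat_lover = {
    "beef": 1.5,
    "other_animal": 1.0,  # assumption
    "plants": 0.8,        # assumption
}

def footprint(diet):
    """Total annual footprint of a diet, in t CO2e/person."""
    return sum(diet.values())

def without(diet, food):
    """The same diet with one food cut out entirely."""
    return {k: v for k, v in diet.items() if k != food}

total = footprint(meat_lover)
no_beef = footprint(without(meat_lover, "beef"))
print(f"meat lover: {total:.1f} t, without beef: {no_beef:.1f} t, "
      f"saved: {total - no_beef:.1f} t")
```

Cutting out only the beef term recovers the 1.5 t saving stated in the text, regardless of how the remaining footprint is split.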
In addition, these changes will have little impact on protein intake. For example, 100 grams of steak contains 78 mg of cholesterol and 25 grams of protein; the same quantity of salmon has a similar 20 grams of protein but only 55 mg of cholesterol.[12] Although both salmon and steak are great sources of protein, one is considerably more harmful to the environment than the other. Therefore, conscious decisions to eat fewer animal foods can both lower intake of dietary cholesterol and reduce carbon footprints, improving both the body's health and the environment. Figure 7: Carbon Footprints (t CO2e/person) Based on Different Diets[8] Figure 8: Relation Between Risk for Disease and Food Carbon Emissions[7,8,9] Conclusion Because cholesterol is most often ingested through animal foods, which are also carbon-intensive to produce, the amount of animal foods in a diet plays a large role in determining one's blood cholesterol levels. Managing these levels is essential for controlling risks for life-threatening diseases such as strokes, heart disease, high blood pressure, and Type 2 diabetes. Therefore, decreasing the consumption of animal foods can simultaneously minimize carbon footprints and the risk of cholesterol-related disease, combating primary issues of both health and the environment.
References
“Cholesterol Metabolism.” University of Waterloo, November 2015. http://watcut.uwaterloo.ca/webnotes/Metabolism/Cholesterol.html.
“Cholesterol Levels: What You Need to Know.” MedlinePlus. U.S. National Library of Medicine, December 4, 2017. http://www.medlineplus.gov/cholesterollevelswhatyouneedtoknow.html.
Collins, Sonya. “Inherited High Cholesterol: Genetic Conditions, Family History, and Unhealthy Habits.” WebMD.
WebMD, March 22, 2016. http://www.webmd.com/cholesterol-management/features/high-cholesterol-genetics.
“Cholesterol: Your Ultimate Guide.” COBRACOR COACHING, October 13, 2019. https://cobracor.weebly.com/blog/cholesterol-your-ultimate-guide.
“Coronary Artery Disease or Atherosclerosis.” UR Medicine Cardiology, University of Rochester Medical Center, May 2019. https://www.urmc.rochester.edu/highland/departments-centers/cardiology/conditions/coronary-artery-disease.aspx.
Murphy, Sherry L., Jiaquan Xu, Kenneth D. Kochanek, and Elizabeth Arias. “Products – Data Briefs – Number 328 – November 2018.” Centers for Disease Control and Prevention, November 29, 2018. http://www.cdc.gov/nchs/products/databriefs/db328.htm.
Wilson, Lindsay. “Food Carbon Footprint Diet.” Shrink That Footprint, March 2013. http://www.shrinkthatfootprint.com/food-carbon-footprint-diet.
Rapaport, Lisa. “Red Meat Linked to Increased Stroke Risk.” Reuters, November 25, 2015. http://www.reuters.com/article/us-health-meat-stroke-risk/red-meat-linked-to-increased-stroke-risk-idUSKBN0TE2IA20151125.
“Vegan & Plant-Based Diets and Heart Disease.” Cleveland HeartLab, December 28, 2017. http://www.clevelandheartlab.com/blog/vegan-plant-based-diets-heart-disease/.
Ritchie, Hannah. “Food Production Is Responsible for One-Quarter of the World’s Greenhouse Gas Emissions.” Our World in Data, November 6, 2019. https://ourworldindata.org/food-ghg-emissions.
“Carbon Footprint Factsheet.” Center for Sustainable Systems, University of Michigan, July 2019. http://css.umich.edu/factsheets/carbon-footprint-factsheet.
“Saturated Fat.” American Heart Association, June 1, 2015. http://www.heart.org/en/healthy-living/healthy-eating/eat-smart/fats/saturated-fats.
“Causes of Heart Failure.” American Heart Association, May 31, 2017. http://www.heart.org/en/health-topics/heart-failure/causes-and-risks-for-heart-failure/causes-of-heart-failure.
“Cholesterol.” Better Health Channel. Department of Health & Human Services, February 28, 2014. http://www.betterhealth.vic.gov.au/health/conditionsandtreatments/cholesterol.
Ede, Georgia. “The Vegan Brain.” Psychology Today, September 30, 2017. http://www.diagnosisdiet.com/diet/vegan-diets/.
Haring, Bernhard, Jeffrey R. Misialek, Casey M. Rebholz, Natalia Petruski-Ivleva, Rebecca F. Gottesman, Thomas H. Mosley, and Alvaro Alonso. “Association of Dietary Protein Consumption With Incident Silent Cerebral Infarcts and Stroke.” Stroke 46, no. 12 (December 2015): 3443–50. https://doi.org/10.1161/strokeaha.115.010693.
“High Cholesterol Facts.” Centers for Disease Control and Prevention, February 6, 2019. http://www.cdc.gov/cholesterol/facts.htm.
“High Cholesterol.” Mayo Clinic. Mayo Foundation for Medical Education and Research, July 13, 2019. http://www.mayoclinic.org/diseases-conditions/high-blood-cholesterol/symptoms-causes/syc-20350800.
Morgan, Kate. “Story from Blue Cross Blue Shield Association: These Are the Top 10 Health Conditions Affecting Americans.” USA Today. Gannett Satellite Information Network, November 6, 2018. http://www.usatoday.com/story/sponsor-story/blue-cross-blue-shield-association/2018/10/24/these-top-10-health-conditions-affecting-americans/1674894002/.
“Preventing High Cholesterol.” Centers for Disease Control and Prevention, October 31, 2017. http://www.cdc.gov/cholesterol/prevention.htm.
“September Is National Cholesterol Education Month.” Centers for Disease Control and Prevention, November 25, 2013. http://www.cdc.gov/cholesterol/cholesterol_education_month.htm.
About the Author Audrey is a junior in high school from California interested in Environmental Studies and Cancer Research. She plays both Varsity squash and tennis for her high school and loves reading and drawing in her free time.
- Factors Affecting Efficacy of Thiopurines for Crohn’s Disease Treatment
Author: Eesha Bethi Abstract Crohn's disease is an autoimmune disease in which a person's T cells attack the GI tract and cause excess inflammation. Treatments for Crohn's disease have advanced very little in the last decade, even though 3 million Americans are affected by it [1]. The main way Crohn's is treated is through prescribed thiopurine drugs. These drugs effectively treat many autoimmune disorders by suppressing the immune system, more specifically its capacity to inflame tissue. In this study, publicly available databases were used to determine what factors may affect the efficacy of thiopurine drugs in individuals with Crohn's disease. First, thiopurine metabolization and how the compounds carry out their function as immunosuppressants were examined, revealing how external factors change the effectiveness of the drug. Then, the role of 6-MP and other thiopurine derivatives in the bile and caffeine pathways was studied, as well as enzymatic disorders that could prevent the drug from being metabolized. The thiopurine-pathway enzymes most likely to have deficiencies with large repercussions were TPMT, XO, IMPD, ITPA, and HPRT. These are important to know in order to determine a patient's responsiveness to the drug, the appropriate dosage, and whether the patient would need other supplements to keep the pathway functioning. This study concluded that excessive use of compounds or enzymes in the caffeine or bile pathways would detract from the amount available for thiopurine metabolism. Moreover, the outcomes also showed that an enzyme deficiency will likely result in decreased levels of necessary compounds, like 6-MP, that are needed to fully metabolize the drug. Genetic screening is a beneficial way to prevent enzyme deficiencies from reducing the effectiveness of thiopurine treatment.
This could be used to properly determine the dosage of thiopurines given to a patient so that the treatment is effective, while also accounting for the amount of compounds and enzymes available to the body as a whole, including other pathways. Introduction Crohn's disease is an inflammatory bowel disease (IBD) that causes chronic inflammation in the gastrointestinal (GI) tract. The disease can affect any part of the digestive tract from the mouth to the anus, but inflammation is most common at the end of the small intestine (ileum) and the beginning of the large intestine. The inflammation can permeate the entire thickness of the intestinal wall, and it can appear in patches throughout the GI tract. The disease is most often diagnosed in adults between the ages of 20 and 30; however, it can develop at any age [2]. The direct cause of Crohn's is still unknown, but most studies suggest genetic and environmental factors, like diet or stress, as possible contributors. Harmless bacteria in the GI tract are mistaken by the immune system for pathogens, provoking an immune response that causes inflammation [1]. Common symptoms include persistent diarrhea, rectal bleeding, abdominal pain, loss of appetite, weight loss, and fatigue [3]. There are many gaps in current Crohn's research. Five focus areas remain under-researched: preclinical human IBD mechanisms, environmental triggers, novel technologies, precision medicine, and pragmatic clinical research [4]. Research on preclinical human IBD mechanisms studies biochemical pathways, using humanized disease models, to yield novel and effective therapeutic interventions. Its specific research gaps include triggers of immune responses, intestinal epithelial homeostasis and wound repair, age-specific pathophysiology, disease complications, heterogeneous response to treatments, and determination of disease location [5].
Precision medicine works to tailor treatments to the specific clinical and biological characteristics of individual patients in order to deliver optimal care. Its main research gaps include understanding and predicting the natural history of IBD (disease susceptibility, activity, and behavior), predicting disease course and treatment response, and optimizing current molecular technologies while developing new ones [6]. Lastly, the main research gaps within pragmatic clinical research include understanding the epidemiology of IBD, selecting medications accurately to increase treatment effectiveness, defining how clinicians utilize therapeutic drug monitoring, studying pain management, and understanding health economics and healthcare resource utilization [7]. All of these questions still need to be answered, demonstrating how much more research is required to treat Crohn's disease effectively. Currently, Crohn's disease is commonly treated with azathioprine or mercaptopurine, both of which are thiopurines. Thiopurines are immunosuppressive drugs that deactivate the T cells which cause inflammation. Once thiopurines enter the body, they must be metabolized into different compounds by a series of enzymes to create thioguanine [8]. Thioguanine is incorporated into the DNA of T cells during DNA replication in place of a guanine nucleotide. This corrupts the information the cell receives and shuts down the inflammation response. Objective The current issue with Crohn's research is simply that it is not being prioritized, which leaves significant gaps in how patients are treated. The objective here is to identify how thiopurines aid in the treatment of Crohn's disease. This paper will then explain why some people with Crohn's disease respond better or worse to the common thiopurine treatment and how to remedy these issues.
This will hopefully help millions of people get access to proper treatments and dosages and incite further research into new treatments for Crohn's. Methods/Results AZA/6-MP Pathway The KEGG (Kyoto Encyclopedia of Genes and Genomes) database was used to identify the intermediates that the drugs azathioprine and mercaptopurine are metabolized into before being incorporated into T cell DNA [9]. Searching each drug's pathways individually showed that both compounds exist in the same pathway, the “Drug Metabolism – Other Enzymes Reference Pathway”. Figure 1 lists all the compounds azathioprine is metabolized into and the enzymes that catalyze each reaction. Knowing this makes it easier to see how compounds in the pathway are incorporated into other processes. Figure 1: Metabolic pathway of AZA. Taken from the KEGG database and annotated with additional information. Red indicates an enzyme and black indicates a compound. [10] Azathioprine (AZA) is converted into 6-mercaptopurine (6-MP) with the help of glutathione or other endogenous sulfhydryl-containing proteins; this reaction also produces 1-methyl-4-nitro-imidazole. 6-MP is then further metabolized by three enzymes: thiopurine S-methyltransferase (TPMT), hypoxanthine phosphoribosyltransferase (HPRT), and xanthine oxidase (XO). TPMT adds a methyl group to 6-MP to create 6-methyl-mercaptopurine (6-MMP); XO converts 6-MP into 6-thiouric acid (6-TUA); and HPRT metabolizes 6-MP into 6-thioinosine monophosphate (6-TIMP). The monophosphate kinase (MPK) enzyme adds a phosphate group to 6-TIMP to create 6-thioinosine diphosphate (6-TIDP), and diphosphate kinase (DPK) converts that into 6-thioinosine triphosphate (6-TITP). The inosine triphosphate pyrophosphatase (ITPA) enzyme converts some of the 6-TITP back into 6-TIMP. Meanwhile, TPMT adds a methyl group to 6-TIMP to create 6-methyl-thioinosine monophosphate (6-MTIMP).
The MPK and DPK enzymes then convert 6-MTIMP into 6-methyl-thioinosine diphosphate (6-MTIDP) and 6-methyl-thioinosine triphosphate (6-MTITP); 6-MMP is also converted to 6-MTIMP by HPRT. 6-TIMP is additionally metabolized by inosine monophosphate dehydrogenase (IMPD) to 6-thioxanthosine monophosphate (6-TXMP), which is then converted into the 6-thioguanine nucleotides: 6-TGMP, 6-TGDP, and 6-TGTP. Guanosine monophosphate synthetase (GMPS) turns 6-TXMP into 6-thioguanine monophosphate (6-TGMP), which is metabolized into 6-thioguanine diphosphate (6-TGDP) and 6-thioguanine triphosphate (6-TGTP) by MPK and DPK. Additionally, TPMT adds a methyl group to 6-TGMP to create 6-methyl-thioguanine monophosphate (6-MTGMP) as a byproduct. HPRT converts 6-thioguanine (6-TG) into 6-thioguanine nucleotides, while TPMT turns 6-TG into 6-methyl-thioguanine (6-MTG) and XO converts 6-TG to 6-thiouric acid (6-TUA). [11] Figure 2: The process by which 6-TGN, an eventual derivative of thiopurine, causes apoptosis in T cells. Figure 3: The process by which the 6-TGTP nucleotide, created from thiopurine metabolism, binds to Rac1, blocks an anti-apoptotic protein, and prevents inflammation. The thioguanine nucleotides (6-TGN) created from the drug are incorporated into the T cell DNA during DNA replication in place of regular guanine nucleotides. This triggers the mismatch repair (MMR) system to correct the errors in the DNA; in this case, however, the MMR system works incompletely, resulting in apoptosis of the T cell (Figure 2). Under normal conditions, guanosine triphosphate binds to Rac1 together with the Vav1 protein, and the guanine nucleotide exchange factor catalyzes the cycling of Rac1 between its GTP-bound (active) and GDP-bound (inactive) states. However, when thiopurine drugs enter the body, thioguanine triphosphate (6-TGTP), one of the three thioguanine nucleotides, binds to Rac1 in place of GTP.
GAP proteins then convert 6-TGTP-bound Rac1 to 6-TGDP-bound Rac1, and Vav1 becomes unable to catalyze the exchange between the two, resulting in a build-up of inactive 6-TGDP-bound Rac1. This decreases inflammation in two ways. The first is apoptosis: normally, the Vav1-catalyzed activation of Rac1 results in increased expression of the anti-apoptotic protein Bcl-x, but the build-up of 6-TGDP-bound Rac1 blocks Rac1 activity and prevents Bcl-x formation. Without an anti-apoptotic protein, apoptosis occurs. The second is preventing complex formation between T cells and antigen-presenting cells (APCs). Activated Rac1 removes a phosphate group from ezrin-radixin-moesin (ERM) proteins in T cells, which makes room for APC conjugation with a T cell; this process is reversed when thiopurines prevent Rac1 activation. If T cells cannot bind to APCs, they cannot mount an immune (inflammatory) response to foreign substances, because APCs are what allow T cells to recognize foreign substances (Figure 3). [12] Related Compounds/Pathways After a literature search on the thiopurine mechanism of action in Crohn's disease, the KEGG database was used to identify what other pathways thiopurine intermediates are involved in, under the hypothesis that related pathways may affect drug efficacy or symptoms. 6-MP is also involved in the bile secretion pathway, which was discovered by looking through the substances in the highlighted region (Figure 4). Figure 4: 6-MP involvement in the bile secretion pathway, indicated by compounds/acids highlighted in red. Taken from the KEGG database. [13] In Figure 4, the red circles on the right indicate the location of 6-MP in the pathway. After 6-MP is produced in the thiopurine pathway, some enters the liver to aid in the creation of oleic acid (OA). OA is an antineoplastic, meaning that it prevents the abnormal growth of cells and thus tumors in the liver.
This gives OA some anticancer properties that help protect the liver. However, all antineoplastics have some level of hepatotoxicity, which means that oleic acid is somewhat toxic to the liver; this is why OA must be excreted through urinary output soon after it is made. If everything functions correctly, the more 6-MP produced, the more oleic acid produced, which can either be excreted or become toxic to the liver. [14] Examining the properties of the xanthine oxidase (XO) enzyme in the KEGG database revealed that XO is also used in the caffeine metabolism pathway (Figure 5). As highlighted in red in Figure 5, XO's role there is to create methyluric acid and dimethyluric acid as more caffeine enters the body. Methyluric acid is a major metabolite of caffeine with antioxidant activity that protects cells from being damaged by unstable atoms [15]. Dimethyluric acid acts as a human xenobiotic metabolite [16], meaning it helps transform less polar foreign substances into more polar compounds that can be excreted more easily [17]. Figure 5: Xanthine oxidase (XO) involvement in the caffeine pathway, indicated by enzymes highlighted in red. Taken from the KEGG database. [18] Genes/Proteins and Mutations To identify the diseases or deficiencies that could affect the enzymes involved in the thiopurine pathway, the KEGG database was searched for the properties of each enzyme involved in thiopurine metabolism. The possible diseases and genetic variations that may affect each enzyme were then analyzed. Additional databases such as GeneCards were also used to validate these properties and to understand the possible genetic variations. Each enzyme also has certain mutations that would affect its function in the thiopurine pathway.
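The enzyme-by-enzyme conversions enumerated in the Methods section can be sketched as a directed graph, which makes the role of a single enzyme deficiency concrete: removing one enzyme's edges shows which metabolites become unreachable. The sketch below is illustrative only, not the paper's method; the edge list is abridged, and the first (glutathione-mediated) step is labeled "GST" as an assumption.

```python
# Directed graph of (substrate, product, enzyme) triples, abridged from the
# pathway described in the text. "GST" labels the glutathione step (assumed).
EDGES = [
    ("AZA",    "6-MP",   "GST"),
    ("6-MP",   "6-MMP",  "TPMT"),
    ("6-MP",   "6-TUA",  "XO"),
    ("6-MP",   "6-TIMP", "HPRT"),
    ("6-TIMP", "6-TIDP", "MPK"),
    ("6-TIDP", "6-TITP", "DPK"),
    ("6-TIMP", "6-TXMP", "IMPD"),
    ("6-TXMP", "6-TGMP", "GMPS"),
    ("6-TGMP", "6-TGDP", "MPK"),
    ("6-TGDP", "6-TGTP", "DPK"),
]

def reachable(start, missing_enzyme=None):
    """Metabolites reachable from `start`, skipping edges whose enzyme is missing."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for substrate, product, enzyme in EDGES:
            if substrate == node and enzyme != missing_enzyme and product not in seen:
                seen.add(product)
                stack.append(product)
    return seen

# With HPRT deficient, the thioguanine nucleotides cannot be formed:
print("6-TGTP" in reachable("AZA"))                        # prints True
print("6-TGTP" in reachable("AZA", missing_enzyme="HPRT")) # prints False
```

In this toy model, knocking out HPRT severs the only route from 6-MP toward the active 6-TGN nucleotides, mirroring the efficacy concern discussed later.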
Discussion Significance of Related Pathways The possible relationships between thiopurine metabolism and the pathways in which thiopurine intermediates exist, primarily the bile secretion and caffeine pathways, were examined. The bile secretion pathway would be affected by a change in 6-MP levels. An increase in 6-MP levels may cause hepatotoxicity because of an increase in antineoplastics, which may cause the liver to fail or lead to a host of other hepatic diseases. Such an increase in 6-MP levels is likely if the patient has TPMT deficiency or xanthinuria, which is why making sure a Crohn's patient gets the right dosage is crucial; it is confirmed that a TPMT deficiency can cause an increase in toxicity when treated with thiopurines [34]. If 6-MP levels decrease, there may be an increased likelihood of developing a hepatic tumor; however, this is unlikely because many other antineoplastics are also present in the liver and could serve in place of 6-MP. This is why there is always a delicate balance in how much of the drug to give a patient. As seen in the caffeine diagram above, the XO enzyme is used to make acids such as methyluric acid and dimethyluric acid. One conclusion drawn from this diagram is that the more caffeine that enters the body, the more XO is needed to create these metabolites and keep the pathway functioning. Therefore, if the body did not respond by overexpressing the enzyme, increased caffeine consumption in a patient with Crohn's disease would reduce the effectiveness of the medication, because less XO would be available to perform its function in the thiopurine pathway. This could lead to a build-up of 6-MP, which would either create build-ups of downstream compounds or, if the other enzymes were upregulated, create more thioguanine nucleotides, speeding up the dismantling of T cells.
Certain genetic variants might also affect the caffeine and thiopurine pathways, especially if the patient has Crohn's. For example, a patient with xanthinuria may want to reduce caffeine consumption, because they already have a deficiency and should conserve the XO enzyme for the thiopurine pathway; it has been shown that a complete XO deficiency can cause severe toxicity with a full dose of AZA [35]. Another example: a patient with TPMT deficiency would already be producing an oversupply of 6-MP and would need more of the XO enzyme, which is another reason to reduce caffeine consumption. Genetic Screening Because many common mutations may affect thiopurine metabolism, screening for the mutations listed above could dramatically improve dosing of azathioprine/mercaptopurine for Crohn's disease patients. The aforementioned deficiencies could either increase or decrease the production of thioguanine, which means either more or less of the drug needs to be administered to maximize its effectiveness. Candidate screening technologies include microarrays, polymerase chain reaction (PCR), and DNA sequencing. Microarrays look at all of the chromosomes at once: they contain thousands of short, single-stranded DNA probes attached to a chip, and the patient's DNA is compared against the array to detect duplications, deletions, and other variants. PCR makes copies of numerous short DNA sections from a small sample of genetic material that can later be analyzed and sequenced to determine variants. DNA sequencing determines the order of the base pairs that make up DNA, allowing scientists to look for a specific variant or mutation associated with a disorder.
[36] To improve treatment of Crohn's disease, these tests should be used to detect the mutations/variants listed above, letting a physician know whether there is an enzyme deficiency before administering the drug, so as not to do more harm to the patient. For example, if the patient had an IMPD, ITPA, or HPRT deficiency, meaning much of the drug would not be metabolized into thioguanine, the physician would either need to change the drug dosage or possibly supplement the enzymes to make the pathway efficient. In fact, in multiple patient groups an ITPA variation correlates with increased toxicity to thiopurines [28]. Screening gives the prescriber a much clearer picture of how much thiopurine to give the patient, ensuring that there aren't any ramifications, within the thiopurine pathway or another pathway it connects to, that would otherwise have been missed. Future research could find different, more accurate screening methods to ensure patient safety. There is also a need for more research into how compound and enzyme levels are affected by other pathways and mutations. Much has been hypothesized, but more quantifiable evidence is needed; for example, establishing that a certain percentage of enzyme deficiency leads to a certain percentage of 6-MP increase would create a much more streamlined system for dosing patients with Crohn's. Conclusion Through literature searches and a collection of databases, several possible mechanisms that may cause Crohn's disease patients to respond differently to thiopurines have been identified. The bile secretion and caffeine metabolism pathways have a complex relationship with thiopurine metabolism that potentially affects the rates of reactions in the thiopurine pathway. Additionally, many common genetic variants play a potential role in thiopurine metabolism and response.
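As a purely hypothetical illustration of how screening results might be organized for a prescriber, the sketch below maps enzyme deficiencies to the qualitative directions discussed above. The mapping, names, and wording are assumptions for illustration, not clinical guidance.

```python
# Hypothetical lookup of qualitative dosing notes per deficient enzyme,
# paraphrasing the directions discussed in the text. NOT clinical guidance.
DEFICIENCY_NOTES = {
    "TPMT": "less 6-MP methylated; metabolites accumulate -> consider lower dose",
    "XO":   "less 6-MP cleared to 6-TUA; severe toxicity risk at a full dose",
    "HPRT": "6-MP not converted toward thioguanine nucleotides -> reduced efficacy",
    "IMPD": "reduced thioguanine nucleotide production -> reduced efficacy",
    "ITPA": "6-TITP accumulates; associated with increased toxicity",
}

def screening_report(deficient_enzymes):
    """Return a dosing note for each deficient enzyme found by screening."""
    return {enzyme: DEFICIENCY_NOTES.get(enzyme, "no note on file")
            for enzyme in deficient_enzymes}

for enzyme, note in screening_report(["TPMT", "ITPA"]).items():
    print(f"{enzyme}: {note}")
```

The point of the sketch is structural: screening output reduces to a per-enzyme lookup that a prescriber could review before choosing a dose.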
Small molecules could be used to better regulate the bile secretion and caffeine metabolism pathways in the presence of thiopurines, and genetic screening could improve dosing for patients with genetic variants. Of course, both of these areas will need to be researched deeply before such therapies could be administered to a patient, but this research can help start the conversation about how to improve personalized medicine for Crohn's disease patients.
References
Crohn's & Colitis Foundation. n.d. “Causes of Crohn's Disease.” www.crohnscolitisfoundation.org/what-is-crohns-disease/causes.
Crohn's & Colitis Foundation. n.d. “Overview of Crohn's Disease.” www.crohnscolitisfoundation.org/what-is-crohns-disease/overview.
Crohn's & Colitis Foundation. n.d. “Signs and Symptoms of Crohn's Disease.” www.crohnscolitisfoundation.org/what-is-crohns-disease/symptoms.
Crohn's & Colitis Foundation. n.d. “Current Research Priorities.” www.crohnscolitisfoundation.org/research/challenges-ibd.
Crohn's & Colitis Foundation. n.d. “Preclinical Human IBD Mechanisms.” www.crohnscolitisfoundation.org/research/challenges-ibd/preclinical-human-ibd-mechanisms.
Crohn's & Colitis Foundation. n.d. “Precision Medicine.” www.crohnscolitisfoundation.org/research/challenges-ibd/precision-medicine.
Crohn's & Colitis Foundation. n.d. “Pragmatic Clinical Research.” www.crohnscolitisfoundation.org/research/challenges-ibd/pragmatic-clinical-research.
Neurath, Markus. 2010. “Thiopurines in IBD: What Is Their Mechanism of Action?” National Center for Biotechnology Information. www.ncbi.nlm.nih.gov/pmc/articles/PMC2933759/.
Kanehisa, Minoru, and Susumu Goto. “KEGG: Kyoto Encyclopedia of Genes and Genomes.” Nucleic Acids Research 28, no. 1 (2000): 27-30.
Kanehisa Laboratories. 2019.
“Drug Metabolism – Other Enzymes – Reference Pathway.” Kyoto Encyclopedia of Genes and Genomes. https://www.kegg.jp/kegg-bin/highlight_pathway?scale=1.0&map=map00983&keyword=thiopurine. Derijks, L. J. J., L. P. L. Gilissen, P. M. Hooymans, and D. W. Hommes. 2006. “Review Article: Thiopurines in Inflammatory Bowel Disease.” Alimentary Pharmacology & Therapeutics, (05), 717-718. dpl6hyzg28thp.cloudfront.net/media/thiopurines_pharmacokinetics.pdf. de Boer, Nanne K., Laurent Peyrin-Biroulet, Bindia Jharap, Jeremy D. Sanderson, Berrie Meijer, Imke Atreya, Murray L. Barclay, et al. 2017. “Thiopurines in Inflammatory Bowel Disease: New Findings and Perspectives.” Journal of Crohn’s and Colitis, (12), 611-612. dpl6hyzg28thp.cloudfront.net/media/thiopurines_cell_signalling.pdf. Kanehisa Laboratories. 2020. “Bile Secretion – Reference Pathway.” Kyoto Encyclopedia of Genes and Genomes. https://www.kegg.jp/kegg-bin/show_pathway?map04976+D04931. National Institute of Diabetes and Digestive and Kidney Diseases. 2019. “Antineoplastic Agents.” In LiverTox: Clinical and Research Information on Drug-Induced Liver Injury [Internet]. Bethesda, MD: National Center for Biotechnology Information. www.ncbi.nlm.nih.gov/books/NBK548022/. “1-Methyluric Acid M6885.” n.d. Millipore Sigma. www.sigmaaldrich.com/catalog/product/sigma/m6885?lang=en. National Center for Biotechnology Information. n.d. “1,7-Dimethyluric acid.” PubChem National Library of Medicine. https://pubchem.ncbi.nlm.nih.gov/compound/1_7-Dimethyluric-acid. McGinnity, D.F. 2017. “Xenobiotic Metabolism.” Xenobiotic Metabolism – an Overview | ScienceDirect Topics. www.sciencedirect.com/topics/medicine-and-dentistry/xenobiotic-metabolism. Kanehisa Laboratories. 2018. “Caffeine Metabolism.” Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/kegg-bin/show_pathway?ko00232+K00106. Wang, Liewei, Linda Pelleymounter, Richard Weinshilboum, Julie A. Johnson, Joan M. Hebert, Russ B. Altman, and Teri E. Klein. 2010.
“Very important pharmacogene summary: thiopurine S-methyltransferase.” National Center for Biotechnology Information. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3086840/. Genecards Human Gene Database. n.d. “TPMT Gene (Protein Coding).” GeneCards. https://www.genecards.org/cgi-bin/carddisp.pl?gene=TPMT. “DISEASE: Thiopurine S-Methyltransferase Deficiency (TPMT Deficiency).” n.d. Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/dbget-bin/www_bget?ds%3AH00964. Peretz, Hava, Michael Korostishevsky, David M. Steinberg, Mustafa Kabha, Sali Usher, Irit Krause, Hannah Shalev, Daniel Landau, and David Levartovsky. 2019. “An Ancestral Variant Causing Type I Xanthinuria in Turkmen and Arab Families Is Predicted to Prevail in the Afro-Asian Stone-Forming Belt.” National Center for Biotechnology Information. www.ncbi.nlm.nih.gov/pmc/articles/PMC7012738/. National Institutes of Health. 2020. “Hereditary Xanthinuria – Genetics Home Reference – NIH.” MedlinePlus. ghr.nlm.nih.gov/condition/hereditary-xanthinuria. “DISEASE: Xanthinuria.” n.d. Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/dbget-bin/www_bget?ds%3AH00192. Weizmann Institute of Science. n.d. “HPRT1 Gene.” GeneCards. https://www.genecards.org/cgi-bin/carddisp.pl?gene=HPRT1&keywords=hprt. Nanagiri, Apoorva. 2020. “Lesch Nyhan Syndrome.” National Center for Biotechnology Information. www.ncbi.nlm.nih.gov/books/NBK556079/. “DISEASE: Lesch-Nyhan Syndrome.” n.d. Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/dbget-bin/www_bget?ds%3AH00194. Burgis, Nicholas E. 2016. “A disease spectrum for ITPA variation: advances in biochemical and clinical research.” Journal of Biomedical Science. https://jbiomedsci.biomedcentral.com/articles/10.1186/s12929-016-0291-y#:~:text=ITPA%20mutation%20causes%20infantile%20encephalopathy&text=recently%20identified%20recessive%20ITPA%20mutation,to%20a%20unique%20MRI%20pattern. “DISEASE: Early Infantile Epileptic Encephalopathy.” n.d. Kyoto Encyclopedia of Genes and Genomes.
www.kegg.jp/dbget-bin/www_bget?ds%3AH00606. Weizmann Institute of Science. n.d. “IMPDH1 Gene.” GeneCards. https://www.genecards.org/cgi-bin/carddisp.pl?gene=IMPDH1. Sullivan, Lori S., Sara J. Bowne, David G. Birch, Dianna Hughbanks-Wheaton, John R. Heckenlively, Richard A. Lewis, Charles A. Garcia, et al. 2006. “Prevalence of Disease-Causing Mutations in Families with Autosomal Dominant Retinitis Pigmentosa: A Screen of Known Genes in 200 Families.” In Investigative Ophthalmology & Visual Science, 3052-3064. 7th ed. Vol. 47. N.p.: The Association for Research in Vision and Ophthalmology. https://iovs.arvojournals.org/article.aspx?articleid=2125683. “DISEASE: Retinitis Pigmentosa.” n.d. Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/dbget-bin/www_bget?ds%3AH00527. “DISEASE: Leber Congenital Amaurosis.” n.d. Kyoto Encyclopedia of Genes and Genomes. www.kegg.jp/dbget-bin/www_bget?ds%3AH00837. Azimi, F., M. Jafariyan, S. Khatami, Y. Mortazavi, and M. Azad. 2014. “Assessment of Thiopurine–based drugs according to Thiopurine S-methyltransferase genotype in patients with Acute Lymphoblastic Leukemia.” National Center for Biotechnology Information. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3980020/. Ansari, A., A. De Sica, M. Smith, K. Gilshenan, L. Fairbanks, A. Marinaki, J. Sanderson, and J. Duley. 2008. “Influence of xanthine oxidase on thiopurine metabolism in Crohn’s disease.” Wiley Online Library. https://onlinelibrary.wiley.com/doi/full/10.1111/j.1365-2036.2008.03768.x. American Association for Clinical Chemistry. 2019. “Genetic Testing Techniques.” Lab Tests Online. labtestsonline.org/genetic-testing-techniques. Common Mutation Helpful Information: A – adenine; G – guanine; C – cytosine; T – thymine. Codon – three nucleotides that form a specific genetic code in DNA or RNA and code for an amino acid, which will go on to create proteins. A mutation in which a nucleotide is replaced, added, or deleted can result in a change in the genetic code.
How to Read Mutation Notation: ex) 169G>C (Ala57Pro) Nucleotide position 169 originally had G (guanine), but a mutation replaced it with C (cytosine). This changes the amino acid created at the corresponding codon from Ala to Pro. ex) c.452G > A (p.Trp151Stop) Nucleotide position 452 originally had G (guanine), but a mutation replaced it with A (adenine). This changes the codon from Trp to a stop signal, ending the chain of amino acids that would otherwise have been created. About the author Eesha Bethi is currently a junior at Carroll Senior High School in Southlake, Texas. She has always been interested in STEM fields, specifically molecular biology, and hopes to pursue a career in medicine in the future. Mixed with her interest in autoimmune diseases, this formed the basis of her research. She also enjoys humanitarian volunteer activities and public speaking.
- What Are Fractals and Could They Be the Key to Creating an Automated Cancer Diagnosis System?
Author: Thomas Higman Abstract Cancerous cells can be detected in many ways, one of which uses fractal dimension (a number that describes the complexity of a shape). We analyse the “box counting” method of finding the fractal dimension of an object and the application that this has in finding the fractal dimension of an animal cell. We review research where the fractal dimensions of cancerous and healthy cells are recorded by automated means to demonstrate how effective an automated cancer diagnosis system might be. Fractal dimension is not yet the most effective way to detect cancer, as there is an overlap in fractal dimension values that both healthy and cancerous cells could have. With further research into these processes, this type of diagnosis system could detect cancer earlier. Introduction Since the popularization of fractals by the French polymath Benoit B. Mandelbrot in the mid-1970s, fractal research has played an increasingly important role in many fields of modern science. Fractals occur everywhere, from fractal patterns in art to pathological functions in mathematics. With the use of fractal dimension, a number that describes the complexity of a shape, many complex objects such as mountains or clouds can be characterized. Fractal research has been used for many things, ranging from digital image compression to the prediction of zones of aftershock after an earthquake has hit [1] [2]. A growing application of fractals is in the autonomous detection of cancerous cells from a sample, which could be pivotal in reducing the number of lives lost to cancer. Most notably, in 2016, Chan and Tuszynski were able to achieve an average accuracy of 0.964 in using a computer system to detect whether a cell is cancerous or healthy from its fractal dimension [3]. We explore the possibility of using fractals to detect cancer earlier and automatically in order to save more lives.
Fractals and Fractal Dimension Fractals are geometrical shapes that have an infinite level of detail, just like a picture that produces ever more detail as you zoom into it. As Pickover notes, “The detail continues on for many magnifications – like an endless nesting of Russian dolls within dolls”. Although it is hard to give an exact definition of a fractal without the use of advanced mathematics, there are several basic requirements for an object to be a fractal. These are: an infinite level of detail; self-similarity; a repeated construction; and a non-integer dimension. Self-similarity refers to the shape in question being made out of smaller – perhaps rotated – parts of itself, as can be seen in Figure 1. Self-similarity creates infinitely detailed structures, as the fractal is repeatedly self-similar on an infinite number of scales. A repeated construction refers to the fractal being generated by a repeated rule; Figure 1 was generated by splitting a triangle into 4 equilateral triangles and then removing the middle triangle. When this rule is iterated infinitely, the fractal called the Sierpinski gasket is produced. In Figure 1, the rule was repeated 5 times, which is enough to visualize the Sierpinski gasket at print resolution. In mathematics, this simple rule is called a function and the repeating of it is called iteration. To explain non-integer dimensions, consider the following analogy. If a one-dimensional object is a straight line and a two-dimensional object is (for example) a square, then an object with a dimension between the first and second dimension is an object that is in-between these two shapes. For example, imagine a line that zigzags and curves so intricately that it partially fills the plane. This would have a non-integer (otherwise known as fractal) dimension of between 1 and 2, perhaps 1.543 to give an example.
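This kind of repeated rule is easy to experiment with in code. The short Python sketch below draws the Sierpinski gasket, though for convenience it iterates a different rule from the triangle-removal one in Figure 1: it builds Pascal's triangle and keeps only the odd entries, an equivalent construction that produces the same pattern.

```python
def sierpinski(rows):
    """Return rows of Pascal's triangle with odd entries marked by '*'.

    The odd binomial coefficients trace out the Sierpinski gasket,
    so each iteration adds another layer of self-similar detail."""
    row = [1]
    lines = []
    for _ in range(rows):
        lines.append("".join("*" if v % 2 else " " for v in row))
        # Iterate the rule: each new entry is the sum of its two neighbours.
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return lines

for line in sierpinski(16):
    print(line)
```

Doubling `rows` reproduces the whole pattern at twice the size, which is exactly the self-similarity described above.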
It is worth noting, however, that due to the natural limitations of our universe, there can be no infinitely self-similar physical fractals, only ever models that are accurate to a limited range of scales. This is because our size range is limited by the size of the smallest subatomic particles and, even though they may be extremely small, these objects are not infinite in detail. The dimension of a fractal is a key number that describes the complexity and roughness of a physical or geometrical object, meaning it can be applied to real-life objects and shapes in a 2D plane. While geometric fractals are made by iterating a function, physical fractals can be produced by various processes such as diffusion-limited aggregation (where particles move with random motion before attaching to a main structure, which is how coral grows) or tumour growth. There are different ways to find the fractal dimension, but the simplest is the box-counting method: by taking a shape and superimposing grids with a decreasing grid box side length over the shape, the number of grid boxes that contain (or are superimposed over) the shape can be recorded. An example of the superposition of a grid onto a fractal is shown in Figure 2 with the Koch curve (or Koch snowflake), and the values are shown in Table 1. As we can see from Table 1, as the scale increases, more detail is included as more boxes contain a part of the Koch curve. This data can now be used with the following formula to calculate the fractal dimension of the Koch curve: given that r is the grid box side-length (or scale) and N(r) is the number of boxes that contain part of the Koch curve, the fractal dimension, d, is approximated by d ≈ ln N(r) / ln(1/r), where ln denotes the logarithm to the base e (the natural logarithm). The constant e is known as Euler’s number and is approximately equal to 2.718.
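The box-counting procedure just described can be sketched in a few lines of Python. This is an illustrative implementation, not code from any of the studies reviewed here: a shape is represented as a set of filled pixel coordinates, and the dimension is estimated as the least-squares slope of ln N(r) against ln(1/r) over several grid scales.

```python
import math

def box_count(pixels, s):
    """Count the s-by-s grid boxes that contain at least one filled pixel."""
    return len({(x // s, y // s) for (x, y) in pixels})

def fractal_dimension(pixels, size, scales):
    """Estimate d as the slope of ln N(r) against ln(1/r) by least squares,
    where r = s / size is the grid box side length relative to the image."""
    xs, ys = [], []
    for s in scales:
        r = s / size
        xs.append(math.log(1 / r))
        ys.append(math.log(box_count(pixels, s)))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check: a straight line across a 64x64 grid has dimension 1.
line = [(x, 32) for x in range(64)]
print(round(fractal_dimension(line, 64, [1, 2, 4, 8, 16]), 3))  # 1.0
```

A pixelated Koch curve fed through the same function would give a value near 1.26, in line with Table 1.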
A logarithm is a function where, if we choose a base and an input number, the output number is the exponent (or power) the base needs in order to become the input number. That is to say, if b^y = x, then log_b(x) = y. Table 1 shows the computed approximations to the fractal dimension of the Koch curve by using the values obtained from Figure 2, and some extra data taken from Fractals: A Very Short Introduction, in the equation shown above [6]. Once again, as the scale of measurement increases, the number of boxes containing a part of the Koch curve increases nonlinearly. Regarding the fractal dimension values obtained, the first few values, from different scales, have not yet stabilized. This simply means that these values do not give us an accurate idea of the fractal dimension, as the scale isn’t small enough. However, as the scale of grid box side length decreases, the approximation begins to converge towards the dimension 1.260. As the fractal dimension refers to the unusual scaling behavior of an object, the smaller the scale used is, the more accurate the calculated fractal dimension is. This is true for all geometric fractals, but for physical fractals this can only be the case down to a certain scale (after which the physical object loses its fractal properties). The fractal dimension of the Koch curve is actually log(4)/log(3) (as a direct result of the Koch curve being made from 4 pieces of itself at the scale 1/3), which is 1.2618… [6]. Table 1: Approximate fractal dimension of the Koch curve from the box counting method. Results of Cancer Detection Using Fractal Dimension Cancer is a disease that affects one in two British people during their lifetime [7]. Among the large number of scientists researching cancer, some have turned to fractals to aid diagnosis. Healthy cells usually look smooth under a microscope whereas malignant tumour (cancer) cells usually have a rough and abnormal shape [8].
Figure 3 shows an example of the structure of cancerous and healthy cells. The shapes of these cells cannot be described by using Euclidean geometry, but a cell’s fractal dimension captures some of its characteristics in a single number. It is then possible to tell whether a cell is cancerous or not by comparing its fractal dimension with that of a healthy cell [9]. In 2000, in one of the first papers to look at the link between fractal dimension and cancer, Baish and Jain write that “Fractal analysis shows its greatest promise as an objective measure of seemingly random structures and as a tool for examining the mechanistic origins of pathological form” [11]. This paper mostly looked at the comparison between the fractal dimension of tumour vessels and blood vessels, such as arteries and veins, and it demonstrated the great potential that fractal dimension has for characterizing cancer cells. Soon after, Bauer and Mackenzie gave one of the first examples where computer programs and algorithms were used to find the fractal dimension of cells [12]. They found that when the box-counting method was used to find the fractal dimension of a variety of cells from healthy patients and patients with hairy-cell leukaemia, none of the healthy cells had a fractal dimension (d) of more than 1.28 but a large percentage of the cancer cells had d > 1.28. This is shown in the histogram in Figure 4. This study proved the capability of fractal dimension to distinguish cancer cells from healthy cells. It is worth noting that a large proportion of the cancer cells also had d < 1.28, but when a sufficiently large amount of data is compiled it is clear that the healthy cells have a smaller fractal dimension on average. Fifteen years later, in 2016, Chan and Tuszynski looked at automatically finding the fractal dimension of cells [3]. 
They used a large database of breast cancer histopathology images (microscopic images of tissue used to study a disease), including both benign and malignant tumour cells, to test the accuracy of a computer prediction system at detecting whether a cell was cancerous or not. The system used a support vector machine algorithm, a basic form of machine learning, in order to find a cut-off fractal dimension for cancerous cells. The system was 0.979 accurate at classifying a cell at 40x magnification, which means that it formed the correct diagnosis about 98 times in 100. However, it was only 0.556 accurate at predicting what subtype of breast cancer the patient had (if any), which in reality is no better than a coin toss. It is interesting to note that magnifications above 40x yielded worse results, perhaps as a result of the natural limitations objects have in real life (they don’t have an infinite structure). This data shows great potential for the use of machines to classify benign and malignant tumour cells, particularly as, even when the size of the training set was reduced, there was an average accuracy of 0.964. However, this data does have some limitations: all of the images used were of a consistent standard (as they were all taken from an online database), which helped to produce reliable results but doesn’t give an accurate representation of the accuracy of results when different dyes and methods of obtaining images of the cells are used. Moreover, only breast cells were used in this research. Chan and Tuszynski concluded by stating, “At the very least, this could be used to assist in the diagnostic procedures and reduce the time burden on pathologists”. Although oncologists are finding different ways of detecting cancer cells and classifying them, there are many benefits to implementing a fractal dimension-based system.
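The idea of learning a single cut-off value from labelled fractal dimensions can be illustrated with a much simpler stand-in for the support vector machine: a grid search over candidate thresholds. The dimension values and labels below are hypothetical, chosen only to mirror the kind of separation reported in these studies.

```python
def best_cutoff(dims, labels):
    """Grid-search a fractal-dimension threshold that maximises accuracy.

    A toy stand-in for the SVM decision boundary: predict malignant (1)
    when d exceeds the cutoff, healthy (0) otherwise."""
    best = (0.0, None)
    for c in sorted(dims):
        acc = sum((d > c) == bool(lab) for d, lab in zip(dims, labels)) / len(dims)
        if acc > best[0]:
            best = (acc, c)
    return best

# Hypothetical data: label 1 = malignant, 0 = healthy.
dims = [1.10, 1.15, 1.22, 1.26, 1.31, 1.35, 1.40, 1.45]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_cutoff(dims, labels))  # (1.0, 1.26)
```

In practice the two classes overlap, as noted above for the d = 1.28 threshold, so accuracy below 1.0 is the realistic outcome.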
By using machine learning and fractal dimension, the issue of human subjectivity is mostly eliminated (human judgement perhaps being needed only in less clear circumstances). This could lead to a standardisation in the classification of cancer cells and could help scientists to understand more about the way cancer grows. For example, Chan and Tuszynski speculate that there could be a “correlation between the fractal dimension of the pathology slide and a clinical outcome measure such as 5 year survival of the patient” [3]. Automatic machine prediction could also speed up the process of cancer detection and could mean that more areas of the body could be tested for cancer. However, there are possible safety concerns and problems with using machine prediction. The machine could fail to correctly predict whether cancer cells are present in a patient, which could lead to the late detection of cancer or a false sense of safety in the patient. This could be resolved by requiring cross-examination of results with an oncologist, a doctor who treats cancer patients, but this could remove any speed benefits that a machine prediction could bring. Furthermore, so far only limited types of cancer cells have been tested with this technology and only in a retrospective analysis, not a blind experiment, meaning that it is unclear how successful this technology could be at detecting all cancer cells in a real environment. This method also requires the taking of biopsies of patients – which may not always be desirable since it involves the collection of cell samples using surgical tools and may lead to infections – and the digitization of each tissue sample, but in most cases biopsies would be taken regardless. If the same dye is not used in every biopsy to stain the cells, the results could be unreliable.
In another paper, by Tambasco, it was found that higher values of fractal dimension were associated with lower tumour malignancy, which is the opposite of the results in Chan and Tuszynski’s study [3] [13]. The two studies used different dyes; in the latter the dye showed more extracellular detail, which may be why the fractal dimension results were different. It is clear that much further research and testing needs to be done on this topic before it could begin to be trialed for implementation. Conclusion Since the surge in popularity of fractals in the 1970s, scientists have found new ways to describe and explain all manner of previously “pathological” problems and “mathematical monsters” [14]. It is clear that fractals will play an important role in future science because of their unique way of describing nature and its systems. Wheeler, an American theoretical physicist, stated that “no one will be considered scientifically literate tomorrow who is not familiar with fractals” [16]. The threat of cancer has been an issue that has impacted most people, whether indirectly or directly. There are different ways of automatically detecting cancer, such as automatic segmentation of cell images to classify asymmetric abnormalities [16] [17]. However, the use of fractal dimension to detect cancer cells is becoming more useful because it can give an estimate of the malignancy of a cancer cell. Fractal dimension might be most effective when working alongside other diagnosis techniques and systems, particularly ones based on machine learning or deep learning. Although there is still much research to be done, it seems that fractal dimension-based cancer diagnosis systems could be pivotal in increasing the speed of cancer detection, ultimately saving many lives. References Barnsley, Michael F., and Lyman P. Hurd. 1993. Fractal Image Compression. Natick, Massachusetts, United States: A K Peters Ltd. Caneva, Alexander, and Vladimir Smirnov. 2005.
Using the fractal dimension of earthquake distributions and the slope of the recurrence curve to forecast earthquakes in Colombia. Earth Sciences Research Journal, 8, p. 3–9. Chan, Alan, and Jack A. Tuszynski. 2016. Automatic prediction of tumour malignancy in breast cancer with fractal dimension. Royal Society Open Science, 3, https://doi.org/10.1098/rsos.160558. Pickover, Clifford A. 2009. The Math Book. New York: Sterling Publishing Co., Inc. Higham, Desmond J., and Nicholas J. Higham. 2017. MATLAB Guide. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, third ed., p. 16. Falconer, Kenneth. 2013. Fractals: A Very Short Introduction. New York: Oxford University Press, p. 47. Ahmad, Ahmad S., N. Ormiston-Smith, and Peter D. Sasieni. 2015. Trends in the lifetime risks of developing cancer in Great Britain: comparison of risk for those born from 1930 to 1960. British Journal of Cancer, 112, p. 943. Eldridge, Lynne. 2018. Cancer Cells vs. Normal Cells: How Are They Different. (https://www.verywellhealth.com/cancer-cells-vs-normal-cells-2248794). Marius, Ioanes, and Adriana Isvoran. 2006. About Applying Fractal Geometry Concepts in Biology and Medicine. Annals of West University of Timisoara: Series of Biology, 9, p. 23-30, (http://www.biologie.uvt.ro/annals/vol_9/vol_IX_23-30_Ioanes.pdf). “Normal and cancer cells structure”. Wikimedia Commons. (https://commons.wikimedia.org/wiki/File:Normal_and_cancer_cells_structure.jpg). Baish, James W., and Rakesh K. Jain. 2000. Fractals and Cancer. Cancer Research, 60 xiv, p. 3683-3688, (http://cancerres.aacrjournals.org/content/60/14/3683.abstract). Bauer, Wolfgang and Charles D. Mackenzie. 2001. Cancer detection on a cell-by-cell basis using a fractal dimension analysis. Acta Physica Hungarica Series A, Heavy Ion Physics, 14 i, p. 43-50, https://doi.org/10.1556/APH.14.2001.1-4.6. Tambasco, Mauro, and Anthony M. Magliocco. 2008.
Relationship between tumor grade and computed architectural complexity in breast cancer specimens. Human Pathology, 39 5, p. 740-746, https://doi.org/10.1016/j.humpath.2007.10.001. Zobitz, Jennifer. 1987. Fractals: Mathematical monsters. Pi Mu Epsilon Journal, 8, p. 425, http://www.jstor.org/stable/24337748. Lesmoir-Gordon, Nigel, Will Rood, and Ralph Edney. 2009. Fractals: A Graphic Guide. Icon Books, London, p. 3. “Automated Cancer Diagnosis”. British Council Turkey, accessed November 2, 2018, https://www.britishcouncil.org.tr/en/programmes/education/cubed/automated-cancer-diagnosis. Qi, Hairong, and Jonathan F. Head. 2001. Asymmetry analysis using automatic segmentation and classification for breast cancer detection in thermograms. 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 10.1109/IEMBS.2001.1017386. About the Author Thomas Higham is in year 13 studying at Bolton School Boys’ Division. He has a great passion for maths and particularly enjoys reading about all things fractal related. He hopes to have a career in maths and maybe find his own application for fractals in applied mathematics.
- An Introduction to the Application of Statistics in Big Data
Abstract Statistics, in its modern sense, is a field of study that aims to analyze natural phenomena through mathematical means. As a highly diverse and versatile discipline, statistics has been developed not only in the areas of STEM subjects, but also in the spheres of social science, economics, and humanities. More recently, the use of statistics in big data has increased, mainly in relevance to machine learning and artificial intelligence. In amalgamation with these subjects, statistics has been used in numerical and textual analysis, and is also starting to be applied in areas previously thought of as exclusively the domain of humans, such as the arts. Although there are differences between conventional statistics and these new developments, their purpose, which is finding associations in data, remains the same. Introduction By reviewing the history and current developments of statistics, this article aims to outline the possible future trajectory for statistics in this new age of big data, as well as the increased role statistics now has in primary and secondary schools as a result of its expanding role in multiple disciplines. Moreover, it also addresses some realistic criticisms and concerns about the subject in the context of the rapidly advancing technology of our world. Historical Development While historical records date the first population census — an important part of statistics — to 2 AD, during the Han Dynasty of China, the first form of modern statistics is cited to have emerged in 1662, when John Graunt founded the science of demography, a field for the statistical study of human populations. Among his notable contributions to the development of statistics, his creation of census methods to analyze survival rates of human populations according to age paved the way for the framework of modern demography.
While Graunt is largely responsible for the creation of a systematic approach to collecting and analyzing human data, the first statistical work had emerged long before his time in the book Manuscript on Deciphering Cryptographic Messages. This was published some time in the 9th century, and the author, Al-Kindi, discusses methods of using statistical inference—the usage of statistics from sample data to derive inferences about the entire population—and frequency analysis—the study of repetition in ciphertext—to decode encrypted messages. This book later laid the groundwork for modern cryptanalysis and statistics[1]. The first step toward the development of statistics in its modern form was in the 19th century, when two mathematicians, Sir Francis Galton and Karl Pearson, introduced to statistics the notion of a standard deviation, a numerical representation of the deviation of a set of data from its mean; methods of identifying correlation, a measure of the strength of a directly proportional relationship between two quantitative variables; and regression analysis, a statistical method to determine the graphical relationship between the independent variable and the dependent variable in a study. These new developments allowed statistics not only to be used more actively to study human demographics, but also to play a part in the analysis of industry and politics. They later went on to found the first university statistics department and the first statistics journal. This was the beginning of statistics as an independent field of study[2]. The second wave of modern statistics came in the first half of the 1900s, when it started becoming actively incorporated into research works and higher education curricula. A notable contributor in this time period was Ronald Fisher, whose major publications on statistics helped outline statistical methods for researchers.
He gave directions on how to design experiments to avoid unintentional bias and other human errors; described how statistical data collection and analysis methods could be improved through means such as randomized design, a data collection method where subjects are assigned random values of the variable in question, which in turn removes possible unintentional bias on the part of both subjects and researchers; and set an example of how statistics could be used to explore various questions if a valid null hypothesis—the hypothesis in a statistical analysis that states that there is no significant difference (other than those as a result of human error during sampling) between two population variables in question—an alternative hypothesis—the hypothesis that states that there is a discernible difference between the two variables in a statistical study—and a set of data could be generated from the experiment. One such example was demonstrating changes in crop yields, analyzed and published by Fisher in 1921 [3]. In the latter half of the 20th century, the development of supercomputers and personal computers led to greater amounts of information being stored digitally, causing rapid growth in the volume of stored data. This resulted in the advent of the term “big data,” which refers to sizable volumes of data that can be analyzed to identify patterns or trends in them. Applications of big data range from monitoring large-scale financial activities, such as international trade, to customer analysis for effective social media marketing, and with this growing role of big data has come a subsequent increase in the importance of statistics in managing and analyzing it. The role of statistics in education and research Having become an official discipline at the university level in 1911, statistics has since been incorporated into departments of education on various different levels.
Notably, basic statistical concepts were first introduced to high schools in the 1920s. The 1940s and 1950s saw vigorous efforts to broaden the availability of statistical education from younger years, spurred on by governmental and social efforts during and after the Second World War, when statistical analysis became frequently used to analyze military performance, casualties, and more. While educational endeavors slackened in the 1960s and 1970s, the boom of big data brought back interest in statistics from the 1980s onward[4]. Presently, statistics is taught in both primary and secondary schools, and is also offered as Honors and Advanced Placement courses to many high school students hoping to study the subject at the college level and beyond. The field of statistics has also become a crucial element in research, ranging from predicting the best price of commodities based on levels of consumer demand in the commercial sphere to determining the effectiveness of certain treatments in the medical profession. By incorporating statistics into research, researchers have been able to find ways to represent the credibility of their findings through data analysis, and have also been able to find and prove causal relationships using hypothesis testing. Statistics is especially necessary and irreplaceable in research in that, as mentioned, it is the most accurate form of measuring the reliability of the results drawn from a study. Whether that be measuring the confidence interval of a population mean, or testing whether a new treatment has any effect on patients when compared with a placebo, it places mathematical limitations on the objective aspects of research[5]. Moreover, statistics allows for a study conducted on a sample from a defined population to be extended to that general population given that the research satisfies a number of conditions, the sample being randomly chosen being one such prerequisite.
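As a concrete illustration of the first of these calculations, a large-sample confidence interval for a population mean can be computed with Python's standard library alone. The sample values and the 95% critical value z = 1.96 are illustrative assumptions, not data from any study mentioned here.

```python
import math
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Approximate 95% confidence interval for a population mean
    (large-sample normal approximation)."""
    mean = statistics.fmean(sample)
    # Standard error of the mean: sample standard deviation / sqrt(n).
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean - z * sem, mean + z * sem

sample = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
low, high = mean_confidence_interval(sample)
print(round(low, 3), round(high, 3))  # 4.861 5.139
```

The interval brackets the sample mean of 5.0; a wider interval would indicate a less reliable estimate of the population mean.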
The ability to extend findings from a sample to an entire population without having to analyze every single data point is one of the greatest strengths of statistics.

Statistics in Big Data and Artificial Intelligence

In the age of big data and artificial intelligence (AI), meaning intellectual reasoning and ability demonstrated by machines rather than humans, statistics is being used in education and research more than ever. Often combined with computer science and engineering, statistics serves in many capacities, such as building probability models through which complex data can be filtered to produce a model of best fit[6]. Statistics continues to be transformed and applied in new ways to cope with the growing size and complexity of big data, as well as the many other rapid advances being made in artificial intelligence. While a large portion of big data is quantitative, qualitative data also plays a large role. Notably, the statistical analysis of text by artificial intelligence has become one of the leading applications of modern statistics. Text mining is the process of deriving information, such as underlying sentiments, from a piece of text. It is closely tied to sentiment analysis, which, put simply, is the analysis of the subjective content of textual data. The fundamental purpose of sentiment analysis is the classification of text by its underlying sentiment: positive, negative, or neutral[7]. For example, “the chef was very friendly” has a positive underlying sentiment, while the sentence “but the food was mediocre” has a negative connotation. While earlier statistical techniques were too underdeveloped for sentiment analysis to work effectively, recent developments in deep learning, a subfield of AI dedicated to mimicking the workings of the human brain[8], have allowed for greater, more complex sentiment analysis.
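To make the three-way classification concrete, here is a minimal lexicon-based sketch in Python. Real sentiment analyzers use trained models; the tiny word lists below are illustrative assumptions, not a real lexicon.

```python
# Minimal lexicon-based sentiment classifier: count positive and negative
# words and report the sign of the difference.
POSITIVE = {"friendly", "great", "delicious", "excellent"}
NEGATIVE = {"mediocre", "rude", "bland", "terrible"}

def classify(sentence: str) -> str:
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("the chef was very friendly"))  # positive
print(classify("but the food was mediocre"))   # negative
```

A deep-learning model replaces the hand-written word lists with weights learned from labeled examples, but the output categories are the same.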
A major application of sentiment analysis is in natural language processing (NLP), the study of how computers can analyze human language and draw conclusions about the connotation of a piece of text, which is often used to measure sentiment in corporate financial statements. For example, when top management comments on quarterly or annual performance, the level of positivity in the comment can be analyzed through NLP. The management report is generally unstructured text, which NLP converts into a structured format that AI can then interpret. Through this process, the performance of companies can be gauged more effectively and accurately. To train computers to identify these implicit undertones, researchers must first provide them with a set of data related to the task. This training method goes beyond sentiment analysis; if a machine is being trained to recognize and locate a human face in an image, as is common in phone camera applications, it must be given a large data set of pictures containing human faces.

Such a data set can be split into three sections: training data, validation data, and testing data. Training data is the data from which the AI model learns, by picking up patterns within the set. It consists of two parts: input information and corresponding target answers. Given the inputs, the model is trained to output the target answers as often as possible, and it can run over the training data numerous times until a solid pattern is identified. Validation data is structured like training data in that it contains both inputs and targets. By running the validation inputs through the model, it is possible to check whether the model produces the target outputs, which indicates whether training has been successful.
Testing data, which comes after both training and validation data, is a series of inputs without any target information. It aims to recreate the realistic environment in which the model will eventually run. Testing data makes no improvements to the existing AI model; instead, it checks whether the model can make accurate predictions on a consistent basis[9]. If it proves able to do so, the program is judged ready for real-world use. An example of these types of data can be found in AlphaGo, a computer program designed to play Go, a two-player board game in which players alternate placing black and white stones, aiming to enclose as much of the board’s territory as possible. Records of professional Go games, some spanning back centuries, contributed to the training data used to teach the AlphaGo program. After analyzing the moves taken by professional players, the creators of AlphaGo set up different versions of the program to play against each other, which served as its validation data. AlphaGo’s widely broadcast matches against professional players, most notably Lee Sedol, were the program’s testing data[10]. The quality and quantity of training data are also crucial to creating an effective AI model. A large set of refined data helps the AI identify statistical patterns and thereby fulfill its purpose more accurately. The facial recognition example makes this point clear: if a large set of images containing human faces is given to the AI during training, it will learn patterns common to human faces, such as the presence of two eyes, a nose, and a mouth, and thereby increase its success rate in identifying faces during testing.
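The three-way split described above can be sketched as follows. The stand-in data set of (input, target) pairs and the 70/15/15 ratio are illustrative assumptions, not a universal rule.

```python
import random

# Stand-in data set of (input, target) pairs.
dataset = [(f"example_{i}", i % 2) for i in range(100)]

random.seed(42)
random.shuffle(dataset)  # randomize before splitting

n = len(dataset)
n_train = (n * 70) // 100
n_val = (n * 15) // 100

train = dataset[:n_train]                       # used to fit the model
validation = dataset[n_train:n_train + n_val]   # used to verify and tune it
testing = dataset[n_train + n_val:]             # held back to mimic real inputs

print(len(train), len(validation), len(testing))  # 70 15 15
```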
However, if images of trees and stones are mixed into the training data, the AI program may find it harder to perceive patterns in the data set accurately, and consequently become less effective at fulfilling its purpose. Moreover, a larger set of training data allows an AI model to make more accurate predictions, since it has a larger pool of information from which to identify and apply patterns. Training data is used for a range of purposes, such as the aforementioned image recognition, sentiment analysis, spam detection, and text categorization. A common theme across these different uses, however, is the possibility of training gone wrong. Artificial intelligence, with its ability to mimic the process of human thought, also raises the possibility that harmful inputs paired with incorrect target results could create a machine with a harmful thought process. For example, if an AI program is continuously shown images of aircraft being bombed and taught that the target result is positive, the machine may come to treat terrorist bombings or warfare as positive when applied to real life. Artificial intelligence, like all things created by humans, retains the potential to be used for malevolent ends. In particular, because we do not fully understand the statistical techniques computers use to analyze training data, we must continue to tread cautiously as we develop and study AI through the application of statistics. The statistical methods used to understand and categorize big data are by no means as simple as those used by human statisticians; in fact, many of the mechanisms computers use to find and analyze patterns in data sets still remain a mystery to us.
They cannot be labeled with discrete descriptions such as “standard deviation” or “normal distribution.” Instead, they are an amalgamation of various complex pattern-identifying and data-processing techniques. Furthermore, the statistical techniques used in the realm of big data and artificial intelligence differ somewhat from previous applications of statistics. For example, the previously mentioned training data is a novel subject that entered statistics only after the field’s introduction to AI. Statistics, which in the past dealt almost exclusively with quantitative data, is now also used to analyze qualitative data, creating the need for such training data. Training data also points to another difference between conventional and modern applications of statistics: statistics in AI and machine learning relies on supervised learning to find relationships in data, while conventional statistics relies on regression analysis[11]. Conventional statistics is more intuitive to humans but limited in its usage. Statistics in AI and machine learning, on the other hand, is essentially a black box that cannot be explained through previous rules, but it proves more efficient at deriving implications from larger and more diverse sets of data. Despite these many distinctions, however, the subject’s fundamental purpose has not changed: statistics, in the end, is an effort to approach phenomena mathematically, identify patterns in data, and apply the findings to new situations. Consequently, recent developments in statistics and its traditional applications should be used in conjunction with each other, each offsetting the other’s drawbacks with its strengths.

Criticisms of statistics

Apart from the concerns raised about the use of statistics in the realm of artificial intelligence and big data, conventional statistics also has its fair share of criticisms.
As a constantly changing and improving discipline, statistics retains imperfections that we should always be cautious of when using statistical analysis in any situation. For example, in 2012, statistician Nate Silver used statistical analysis to correctly predict the results of the U.S. presidential election in all 50 states[12]. While this brought much media attention to the role of statistics in fields beyond those it was commonly associated with, it arguably led to an overreliance on statistical prediction in the next U.S. presidential election. As this example shows, there certainly exist shortcomings in statistics, both in the collection of statistical data and in our use of it.

Among the criticisms frequently made of the subject, a recurring theme stands out: that statistics distorts our perception of phenomena by oversimplifying them. While statistics is a tool for conveniently grasping the message carried by large sets of data, it is, in the end, a discipline based on averages and predictions. The real world does not always behave accordingly, and it often deviates from statistical predictions. Moreover, data analysis is mostly done in the realm of quantitative data, so the qualitative aspects of socioeconomic phenomena are often underrepresented in statistical results. This also makes it easier for statisticians to use data to understate or exaggerate the issue at hand, making some statistical data unreliable[13]. Nevertheless, we need some form of numeric representation for situations that require comparison, so using statistics is necessary. This is why overreliance on statistical analysis is both easy and dangerous. One example is overreliance on GDP statistics, which usually leads to the conclusion that the economic situation of most citizens of a country is improving.
This is not always the case, especially in countries where economic disparity is widening. The individual welfare of the population is not accurately or entirely reflected in a nation’s GDP, which tells us only its overall economic status, including its corporations, government, and net exports. Relying only on GDP statistics may therefore lead to inaccurate conclusions about the personal welfare of the people. No matter how much effort researchers put into refining methods for analyzing numerical data, numbers alone will always fall short of fully representing a real-life phenomenon. Statistics will always fall short of giving a definite answer about virtually anything: conclusions about hypotheses are never certain, and comparisons between two sets of data at best yield a solid prediction. However, it must also be understood that this is the very nature of statistics. It serves to give a better interpretation of complicated issues by removing certain sources of uncertainty during the process of research; it may be too much to expect statistics to give an exact one-to-one portrayal of the situation it is analyzing. Like all other disciplines, it is used best in combination with other approaches and fields.

The future of statistics

Statistics, with its ability to explore social phenomena through hypotheses and to interpret nonphysical trends reliably, is a rapidly growing discipline in the modern world. Able to be used in conjunction with a variety of other subjects, such as mathematics, economics, the social sciences, and computer science, statistics is relevant and necessary in all kinds of fields.
While the future of statistics is not entirely clear (predictions vary as to which domains it will be used in most often and which spheres of knowledge it will most frequently intermingle with), it is safe to say that statistics will take on a similarly important, if not greater, role in our future than it holds now. Statistics has already played a large role in helping us understand general trends in data, and as the world becomes increasingly interconnected, this unique strength will only become more necessary. Big data and artificial intelligence are becoming the centerpiece of modern technological development, and because the statistical techniques used in these fields differ greatly from the statistical mechanisms previously used by human statisticians, adapting data analysis and statistical practice to this new trend is all the more necessary. Combined with statistics, big data and AI have been applied to numerical and textual analysis for many years. This is not, however, the boundary of their potential; efforts are already being made to extend their use into fields of human creation, such as the arts. A major example is the development, by a team of researchers at MIT, of artificial intelligence algorithms that find similarities between paintings by various artists[14]. In a world increasingly reliant on greater amounts and more varied types of big data, statistics must evolve to fit its needs, and it currently seems to be walking down the right path.

References

[1] “History of Statistics.” Wikipedia, Wikimedia Foundation, 15 Aug. 2020, https://en.wikipedia.org/wiki/History_of_statistics. Accessed 19 Aug. 2020. [2] “Statistics.” Wikipedia, Wikimedia Foundation, 17 Aug. 2020, https://en.wikipedia.org/wiki/Statistics. Accessed 19 Aug. 2020. [3] Fisher, R. A. “Studies in Crop Variation. I.
An Examination of the Yield of Dressed Grain from Broadbalk: The Journal of Agricultural Science.” Cambridge Core, Cambridge University Press, 27 Mar. 2009, www.cambridge.org/core/journals/journal-of-agricultural-science/article/studies-in-crop-variation-i-an-examination-of-the-yield-of-dressed-grain-from-broadbalk/882CB236D1EC608B1A6C74CA96F82CC3. Accessed 6 Oct. 2020. [4] Scheaffer, Richard L., and Tim Jacobbe. “Statistics Education in the K-12 Schools of the United States: A Brief History.” Journal of Statistics Education, vol. 22, no. 2, 2014, pp. 1–14, doi:https://doi.org/10.1080/10691898.2014.11889705. Accessed 15 Aug. 2020. [5] Calmorin, L. Statistics in Education and the Sciences. Rex Bookstore, Inc., 1997. [6] Secchi, Piercesare. “On the Role of Statistics in the Era of Big Data: A Call for a Debate.” Statistics & Probability Letters, vol. 136, 2018, pp. 10–14, https://www.sciencedirect.com/science/article/abs/pii/S0167715218300865. Accessed 16 Aug. 2020. [7] Gupta, Shashank. “Sentiment Analysis: Concept, Analysis and Applications.” Towards Data Science, Medium, 19 Jan. 2018, https://towardsdatascience.com/sentiment-analysis-concept-analysis-and-applications-6c94d6f58c17. Accessed 19 Aug. 2020. [8] Brownlee, Jason. “What Is Deep Learning?” Machine Learning Mastery, Machine Learning Mastery Pty. Ltd., 16 Aug. 2019, https://machinelearningmastery.com/what-is-deep-learning/. Accessed 20 Aug. 2020. [9] Smith, Daniel. “What Is AI Training Data?” Lionbridge, Lionbridge Technologies, Inc., 28 Dec. 2019, https://lionbridge.ai/articles/what-is-ai-training-data/. Accessed 20 Aug. 2020. [10] “AlphaGo: The Story so Far.” DeepMind, Google, 2020, https://deepmind.com/research/case-studies/alphago-the-story-so-far. Accessed 6 Oct. 2020. [11] Shah, Aatash. “Machine Learning vs Statistics.” KDnuggets, KDnuggets, 29 Nov. 2016, www.kdnuggets.com/2016/11/machine-learning-vs-statistics.html. Accessed 19 Aug. 2020. [12] O’Hara, Bob.
“How Did Nate Silver Predict the US Election?” The Guardian, Guardian News and Media, 8 Nov. 2012, www.theguardian.com/science/grrlscientist/2012/nov/08/nate-sliver-predict-us-election. Accessed 21 Aug. 2020. [13] Davies, William. “How Statistics Lost Their Power – and Why We Should Fear What Comes Next.” The Guardian, Guardian News and Media, 19 Jan. 2017, www.theguardian.com/politics/2017/jan/19/crisis-of-statistics-big-data-democracy. Accessed 21 Aug. 2020. [14] Gordon, Rachel. “Algorithm Finds Hidden Connections between Paintings at the Met.” MIT News, Massachusetts Institute of Technology, 29 July 2020, https://news.mit.edu/2020/algorithm-finds-hidden-connections-between-paintings-met-museum-0729. Accessed 6 Oct. 2020.
- Experimenting with a pumpkin configuration to reduce radiation-induced cardiovascular disease caused by galactic cosmic rays for the future Moon-Mars mission
Author: Maritza Tsabitah Editor: Afreen Hossain

Abstract

The data obtained from the Apollo lunar astronauts highlights a concerning long-term risk: radiation-induced cardiovascular disease (RICVD) resulting from exposure to galactic cosmic rays (GCR). To ensure the success of future Moon-Mars missions, the development of an effective protective shield is imperative. This study assesses the effectiveness of a pumpkin configuration as a shield against GCR, considering its impact on the cardiovascular system, mortality data from the Apollo lunar astronauts, and insights from previous studies on magnetic shielding. The findings suggest that the pumpkin configuration holds promise as a shield against GCR, but further refinement of the concept, and innovations to speed the implementation of the required simulation codes, are still needed. This study advocates considering the pumpkin configuration as an alternative, alongside NTS training and the recent Orion protection plan’s HERA system, which can efficiently detect and categorize space radiation.

Introduction

In 2022, the Artemis Program heralded the continuation of the Moon to Mars expedition and extended the policy for International Space Station (ISS) operation. This signals the onset of exploration beyond low Earth orbit (LEO) in the foreseeable future. However, astronaut health remains a primary concern for the success of these missions, particularly given the challenges posed by space radiation. The primary sources of space radiation are solar particle events (SPE) and galactic cosmic rays (GCR). While various active shielding methods can protect against SPE, the formidable challenge lies in shielding against GCR, primarily due to the presence of high-charge, high-energy (HZE) ions that can damage physiological functions, notably the cardiovascular system. During a Mars mission, astronauts may be exposed to approximately 1 Sv of GCR, resulting in a potential 5% increase in cardiovascular damage.
The impacts of GCR

Galactic cosmic rays can instigate severe pathologies, including fatal conditions such as cancer, cardiovascular disease, and organ inflammation. A retrospective analysis of the mortality data of 20 deceased US astronauts from 1959 to 1991 underscores the gravity of the cardiovascular implications, attributing 10% of deaths to cardiovascular disease and 5% to cancer. This investigation illuminates the vulnerability of the human cardiovascular system to the deleterious effects of space radiation. Patients exposed to radiation were compared with those diagnosed with radiation-induced cardiovascular disease (RICVD), specifically through chest X-ray and gamma radiation. The outcomes are stark: RICVD was associated with a radiation dose of 500 Gy, significantly surpassing the range of chest X-rays (0.1–120 Gy) and gamma radiation (more than 3 Gy). The constituent elements of GCR, prominently hydrogen (H), iron (Fe), helium (He), and silicon (Si), impart a spectrum of detrimental impacts on the cardiovascular system. These include endothelial dysfunction in the aortic wall, identified as the primary instigator of RICVD; myocardial damage due to apoptosis; perpetuation of a chronic inflammatory state; upregulation of oxidative enzymes; and DNA double-strand breaks. Notably, astronauts in low Earth orbit are partially spared radiation exposure owing to the protective influence of the magnetosphere, while those venturing beyond this shield face heightened susceptibility. Data from the Apollo lunar astronauts accentuates this risk, revealing a cardiovascular mortality rate 4-5 times higher than that of their LEO counterparts. Anticipating these risks is paramount in navigating the interplay between space exploration and cardiovascular well-being.
Experiment with shielding: pumpkin configuration

Numerous initiatives have explored active shielding methods to counter the formidable energy of galactic cosmic rays. Unlike solar particle events, GCR prove resistant to preventive measures involving electrostatic fields and plasma shielding, primarily because those methods target protons rather than HZE ions. Past research has, however, unveiled a promising avenue in the use of superconducting materials within the unique framework of the pumpkin configuration. Superconductors such as niobium-titanium (NbTi) and niobium-tin (Nb3Sn) have emerged as the most efficient options per unit mass for active shielding against space radiation. Leveraging the Lorentz force, these materials generate a magnetic field that deflects charged particles perpendicular to both the field and their velocity, thereby bending their trajectories. Recent advances in the study of the pumpkin configuration have notably propelled magnetic shielding concepts from theoretical constructs toward practical application in spacecraft design.

Result

The pumpkin configuration is a multiple-toroid magnet system in which each toroid is built from three racetrack coils and has a lower construction mass than a typical toroidal structure. Its magnet can shield a 108 m³ volume (5 m diameter by 5.5 m long), reducing the free-space dose by 45%, and it weighs 54. This could reduce the GCR dose absorbed to adequate levels. However, it would not totally protect astronauts from GCR, because there are no recognized radiation limits for the mission and the long-term physical impact has not been well examined. The approach has the potential to limit exposure, but it has yet to be thoroughly researched and improved.
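The Lorentz-force deflection underlying this design can be put on a back-of-envelope footing with the gyroradius r = p/(qB). The field strength and proton energy below are illustrative assumptions, not values from the cited shield designs.

```python
from math import sqrt

C = 299_792_458.0        # speed of light, m/s
Q = 1.602176634e-19      # proton charge, C
M_P_GEV = 0.938272       # proton rest mass, GeV/c^2

def gyroradius_m(kinetic_gev: float, b_tesla: float) -> float:
    # Relativistic momentum from kinetic energy, then r = p / (qB).
    total_e = kinetic_gev + M_P_GEV                 # total energy, GeV
    pc_gev = sqrt(total_e**2 - M_P_GEV**2)          # momentum * c, GeV
    p_si = pc_gev * 1e9 * Q / C                     # momentum, kg m/s
    return p_si / (Q * b_tesla)

# A 1 GeV proton in a 1 T field: radius of a few metres.
print(f"{gyroradius_m(1.0, 1.0):.2f} m")
```

A curvature radius of a few metres for a 1 GeV proton suggests why metre-scale superconducting toroids with tesla-scale fields can bend away a useful fraction of GCR, while the highest-energy HZE particles remain much harder to deflect.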
Another obstacle to implementation is the large number of simulation codes required to model the heavy ions, along with licensing issues.

Alternatives

Nonetheless, there are techniques to safeguard astronauts’ cardiovascular health for near-term missions. NASA’s Orion program developed the Orion radiation protection plan, which includes the Hybrid Electronic Radiation Assessor (HERA). HERA is designed to advise crew members when they need to seek shelter during a radiation event and to characterise the sort of radiation they encounter. Non-technical skill (NTS) training has also been shown to reduce RICVD 1-year death rates by 18%. Compatible NTS training in leadership, teamwork, situation awareness, and surgical skills will yield better mission, safety, and health results for crews.

Conclusion

Space missions beyond LEO are limited by GCR, whose HZE ions are a leading cause of radiation-induced cardiovascular disease among the Apollo lunar astronauts. A proposed solution for protecting astronauts is a magnetic shield built from superconducting materials in a pumpkin configuration, exploiting the Lorentz force. The concept, however, has not been thoroughly tested and developed, and it requires a large number of simulation codes for HZE ions. As an alternative for near-term missions, the Orion radiation protection plan established by NASA would be an effective way to reduce the risk of RICVD. NTS training has also shown positive outcomes for mission success with safe and healthy astronauts, and could be a fundamental requirement for Moon-Mars astronauts.

References

Ferone, Kristine, Charles Willis, Fada Guan, Jingfei Ma, Leif Peterson, and Stephen Kry. "A Review of Magnetic Shielding Technology for Space Radiation." Radiation 3, no. 1 (2023): 46-57.
https://doi.org/10.3390/radiation3010005 Huff, Janice L., Lanik Plante, Steve Blattnig, Ryan B. Norman, Mark P. Little, Amit Khera, Lisa C. Simonsen, and Zarana S. Patel. "Cardiovascular Disease Risk Modeling for Astronauts: Making the Leap From Earth to Space." Frontiers 9, (2022). https://doi.org/10.3389/fcvm.2022.873597 Meerman, Manon, Tom C. Gartner, Jan W. Buikema, Sean M. Wu, Sailay Siddiqi, Carljin V. Bouten, K J. Grande-Allen, Willem J. Suyker, and Jesper Hjortnaes. "Myocardial Disease and Long-Distance Space Travel: Solving the Radiation Problem." Frontiers 8, (2021). https://doi.org/10.3389/fcvm.2021.631985 Townsend, L. W. “Overview of active methods for shielding spacecraft from energetic space radiation.” 1st International Workshops on Space Radiation Research and 11th Annual NASA Space Radiation Health Investigators' Workshop Arona, (2000): 1-2. Abadie, L. J., Cranford, N., Lloyd, C. W., Shelhamer, M. J., and Turner, J. L. “The Human Body in Space.” NASA, February 3, 2021. Accessed July 15, 2023. https://www.nasa.gov/hrp/bodyinspace Delp, M. D., Charvat, J. M., Limoli, C. L., Globus, R. K., and Ghosh, P. “Apollo Lunar Astronauts Show Higher Cardiovascular Disease Mortality: Possible Deep Space Radiation Effects on the Vascular Endothelium.” Scientific Reports, (2016). https://doi.org/10.1038/srep29901 Al Zaman, M. A., Maruf, H. A., Islam, M. R., and Panna, N. “Study on superconducting magnetic shields for the manned long termed space voyages.” The Egyptian Journal of Remote Sensing and Space Science 24, no.2 (2021): 203-210. https://doi.org/10.1016/j.ejrs.2021.01.001 Norbury, J. W., Schimmerling, W., Slaba, T. C., Azzam, E. I., Badavi, F. F., Baiocco, G., . “Galactic cosmic ray simulation at the NASA Space Radiation Laboratory.” Life Sciences in Space Research 8 no.2 (2016): 38-51. https://doi.org/10.1016/j.lssr.2016.02.001 Bird, E., Hargens, A. R., and Petersen, L. G. 
“Magnitude of Cardiovascular System Response is Dependent on the Dose of Applied External Pressure in Lower Body Negative and Positive Pressure Devices.” Frontier, (2019). https://doi.org/10.3389/conf.fphys.2018.26.00031 Robertson, J., Dias, R. D., Gupta, A., Marshburn, T., Lipsitz, S. R., & Pozner, C. N. (2020). “Medical Event Management for Future Deep Space Exploration Missions to Mars.” Journal of Surgical Research, 246, 305-314. https://doi.org/10.1016/j.jss.2019.09.065
- The Leap Year and Orbital Dynamics on Earth
Author: Abhipsha Sahu

Introduction

Happy Leap Year! A quirk of our current calendar system is that every four years an additional day is added to the month of February, giving us the 29th of February: the leap day. The leap day owes its existence to the fact that the Earth takes about 365.25 days to orbit the sun; that quarter of a day adds up to a full missing day every four years. The concept of leap years can make the passage of time feel rather arbitrary, and to some degree it is! The leap day is why, even across timescales of hundreds of years, January is always a cold month in the northern hemisphere, and why the southern hemisphere always has warm Christmases. It keeps the solar year and our calendar year consistent. Leap years are just one consequence of the Earth’s orbital characteristics, but what else do those characteristics mean for us?

Goldilocks and the Origin of Life

Perhaps the most important characteristic of the Earth’s orbit is its distance from the sun. The Earth is often described as lying in the “Goldilocks Zone,” a region where water can exist in primarily liquid form. Liquid water has long been thought to be the reason life exists on Earth, as biological models and fossil evidence indicate that early life originated in the Earth’s oceans. Given that Earth is the only planet currently known to be inhabited by living organisms, our orbit is crucial to our very lives: an orbit too far from or too close to the sun would have made the evolution of life on Earth impossible.

Seasons in the Sun-Earth Orbital System

The most obvious effect of the Earth’s orbit around the sun is that it creates seasons.
It is a popular misconception that seasons arise because the physical distance from the Earth to the sun changes over the course of a year. While it is true that the Earth’s elliptical orbit brings it closer to the sun at some times of year than others, this difference is not large enough to account for seasonal variation. In reality, seasonal variation is almost entirely a result of the tilt of the Earth’s rotational axis, also known as its “obliquity,” as the planet moves around the sun. The tilt makes the distribution of sunlight uneven across the two hemispheres. During summer months, the incoming solar radiation strikes the surface more directly, resulting in hotter weather. The axial tilt also explains why days are longer in summer and shorter in winter: the hemisphere tilted toward the sun sees the sun above its horizon for more of each day. It is also why seasons in the northern and southern hemispheres are always opposed: while one hemisphere receives more direct solar radiation and experiences summer, the other is tilted away, giving rise to winter.

What do Ice Ages and the Pole Stars Have in Common?

The Earth’s orbit is also responsible for long-term variation in the Earth’s climate, through a series of cyclic changes known as Milankovitch cycles. These cycles are governed by three main characteristics: the Earth’s axial tilt, or obliquity; the shape of the Earth’s orbit, or eccentricity; and the direction in which the Earth’s rotational axis points, or precession. The angle of the Earth’s axial tilt varies between two extremes, about 22.1 and 24.5 degrees. At greater angles, the differences between seasons are sharper. Therefore, over tens of thousands of years, seasonal variation becomes more extreme before gradually becoming more uniform. The eccentricity of the Earth’s orbit is a measure of how elliptical it is.
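The obliquity mechanism described above can be sketched numerically. The sinusoidal declination formula below is a standard first-order approximation, and the chosen latitude and days are illustrative.

```python
from math import sin, pi

# Noon solar elevation as a function of axial tilt (obliquity ~23.44 deg),
# using a simple sinusoidal approximation of the sun's declination.
OBLIQUITY = 23.44  # degrees

def declination_deg(day_of_year: int) -> float:
    # Day 80 falls near the March equinox, where declination crosses zero.
    return OBLIQUITY * sin(2 * pi * (day_of_year - 80) / 365.25)

def noon_elevation_deg(latitude_deg: float, day_of_year: int) -> float:
    # Height of the noon sun above the horizon (valid for mid latitudes).
    return 90.0 - abs(latitude_deg - declination_deg(day_of_year))

# At ~51.5 N (London's latitude): a high June sun, a low December sun.
print(f"{noon_elevation_deg(51.5, 172):.1f} deg")  # ~62 deg near the June solstice
print(f"{noon_elevation_deg(51.5, 355):.1f} deg")  # ~15 deg near the December solstice
```

The roughly 47-degree swing in noon sun height between the solstices, with no change in Earth-sun distance anywhere in the calculation, is the tilt-driven effect the passage describes.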
While planetary orbits generally have quite low eccentricities, the fact that the Earth is sometimes closer to the sun and sometimes further away does have small impacts on its climate by affecting the length of seasons. The eccentricity of the Earth's orbit oscillates in an approximately hundred-thousand-year cycle. While this variation doesn't significantly impact the Earth's climate on short time scales, it does over long ones. At higher eccentricities, certain seasons are significantly longer than others; when the orbital eccentricity is lowest, this variation shrinks to almost none. Precession is the phenomenon by which the Earth's rotational axis slowly "wobbles", gradually changing the direction in which it points. Although this change is slow, it eventually leads to variation in the Earth's climate, determining how extreme seasons are in each hemisphere by controlling which one experiences summer at perihelion, the point of closest approach to the sun. Aside from climatic variation, precession is also why the pole stars change every few tens of thousands of years. Combined, the Milankovitch cycles are responsible for long-term climatic patterns like ice ages. The effect of the Earth's orbit on climatic variation is still an active field of research, as mapping out these long-term patterns can get quite complicated, and many questions on the subject remain unanswered. Conclusion Seasonal weather patterns and climatic cycles similar to the Milankovitch Cycles are not unique to the Earth; they are an almost universal consequence of planetary orbital dynamics. However, at this point in human history, it is only on Earth that they directly affect us. Perhaps one day, a multiplanetary human race will investigate the various ways in which planets' orbits affect everyday life. For now, plenty remains to be understood about our own planet's movement around our sun. In 2024, we can celebrate the 29th of February as one such lovely consequence. Once again, Happy Leap Year! 
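As a closing aside, the leap-day arithmetic from the introduction can be checked in a few lines of Python. One refinement not covered above: because the solar year is actually slightly shorter than 365.25 days, the full Gregorian rule also skips century years not divisible by 400 (so 1900 was not a leap year, but 2000 was). A minimal sketch, cross-checked against the standard library:

```python
import calendar

def is_leap(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Cross-check a few years against Python's standard library.
for y in (1900, 2000, 2020, 2023, 2024):
    assert is_leap(y) == calendar.isleap(y)
```

Running this confirms that 2024 is indeed a leap year under both the simple "every four years" rule and the full Gregorian rule.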
References [1] https://warwick.ac.uk/newsandevents/knowledgecentre/science/physics-astrophysics/leap_years/ [2] https://ugc.berkeley.edu/background-content/earths-spin-tilt-orbit/ [3] https://climate.nasa.gov/news/2948/milankovitch-orbital-cycles-and-their-role-in-earths-climate/ [4] https://www.treehugger.com/everything-you-need-to-know-about-earths-orbit-and-climate-cha-4864100
- Number Theory
Author: Afreen Hossain Introduction: Number theory is a fascinating branch of mathematics that deals with the properties and relationships of integers. It may sound intimidating, but at its core, number theory explores the fundamental nature of numbers. In this beginner's guide, we'll take a journey through the basics of number theory, unraveling the mysteries that lie within the world of integers. A few topics under number theory are: The Foundation: Integers Let's start with the basics. Integers are whole numbers, both positive and negative, including zero. They form the foundation of number theory. Examples of integers are -3, -2, -1, 0, 1, 2, 3, and so on. Number theory focuses on understanding the unique properties and patterns within this set of numbers. Divisibility and Factors A key concept in number theory is divisibility. An integer 'a' is said to be divisible by another integer 'b' if 'a' can be expressed as 'b * c', where 'c' is also an integer. For example, 15 is divisible by 3, as 15 = 3 * 5. Factors are integers that divide a given number without leaving a remainder. For instance, the factors of 12 are 1, 2, 3, 4, 6, and 12. Number theory delves into understanding the properties of these divisors and how they relate to the integers. Prime Numbers Prime numbers are a crucial element in number theory. A prime number is a positive integer greater than 1 that has no positive divisors other than 1 and itself. Examples include 2, 3, 5, 7, and 11. Every integer greater than 1 can be uniquely expressed as a product of prime numbers (up to the order of the factors), a result known as the Fundamental Theorem of Arithmetic. Greatest Common Divisor (GCD) and Least Common Multiple (LCM) The GCD of two integers is the largest positive integer that divides both numbers. For example, the GCD of 8 and 12 is 4. The LCM of two integers is the smallest positive integer that is a multiple of both numbers. For instance, the LCM of 8 and 12 is 24. 
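These definitions translate directly into code. The short Python sketch below checks the worked examples from this section (the factors of 12, and the GCD and LCM of 8 and 12), using the identity gcd(a, b) * lcm(a, b) = a * b for positive integers:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    # For positive integers, gcd(a, b) * lcm(a, b) == a * b.
    return a * b // gcd(a, b)

# Factors of 12: integers that divide 12 without leaving a remainder.
factors_of_12 = [d for d in range(1, 13) if 12 % d == 0]
assert factors_of_12 == [1, 2, 3, 4, 6, 12]

# The GCD and LCM examples from the text.
assert gcd(8, 12) == 4
assert lcm(8, 12) == 24
```

(Recent Python versions also provide `math.lcm` directly; it is defined here via `gcd` to make the identity explicit.)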
These concepts are fundamental in solving problems related to divisibility and factors. Modular Arithmetic Modular arithmetic is a fascinating aspect of number theory that deals with remainders. In modular arithmetic, we work with the remainder when dividing one number by another; the number we divide by is known as the modulus. It has applications in cryptography, computer science, and various other fields. Quadratic Residues Quadratic Residues and Non-Residues: In modular arithmetic, a quadratic residue modulo m is any number congruent to a perfect square modulo m; non-residues are numbers that are not congruent to any square. Law of Quadratic Reciprocity: A fundamental result in number theory that establishes a relationship between the solvability of quadratic congruences with different moduli. Arithmetic Functions Euler's Totient Function: Counts the positive integers up to a given number that are coprime with that number (i.e., share no common factor other than 1). Möbius Function: A multiplicative function defined on the positive integers, central to inversion formulas in number theory. Ramanujan's Sum: A type of series discovered by the Indian mathematician Srinivasa Ramanujan. Additive Number Theory Partition Theory: Deals with ways of expressing a number as the sum of positive integers. Goldbach's Conjecture: Posits that every even integer greater than 2 can be expressed as the sum of two prime numbers. Waring's Problem: Explores the representation of numbers as sums of powers of positive integers. Elliptic Curves Elliptic Curve Arithmetic: Studies the properties of elliptic curves and their points. Applications in Cryptography: Elliptic curve cryptography utilizes the difficulty of solving certain problems related to elliptic curves for secure communication. Cryptography RSA Algorithm: A widely used public-key encryption method based on the difficulty of factoring large composite numbers. 
Diffie-Hellman Key Exchange: Allows two parties to establish a shared secret key over an untrusted communication channel. Applications of Number Theory in Modern Cryptography: Number-theoretic concepts underpin the security of cryptographic systems. These are only a few illustrations of the wide field of number theory, which has numerous connections and is still being actively studied. Beyond pure mathematics, it is used in many branches of information theory, computer science, and cryptography. References: https://www.programmersought.com/article/22024550501/. Number Theory and Cryptography I. Introduction, https://pi.math.cornell.edu/~mec/2008-2009/Anema/numbertheory/intro.html. https://www.profaccred.com/number-theory/. https://www.vaia.com/en-us/explanations/math/pure-maths/number-theory/. “Number Theory - Definition, Examples, Applications.” Cuemath, https://www.cuemath.com/numbers/number-theory/.
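Several of the arithmetic-function topics surveyed in this article, such as Euler's totient function and quadratic residues, are easy to experiment with directly. The following is a minimal brute-force Python sketch (fine for small numbers, far too slow for the cryptographic sizes mentioned above):

```python
from math import gcd

def euler_totient(n: int) -> int:
    # Count the integers in 1..n that are coprime with n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def quadratic_residues(m: int) -> set:
    # Squares of the residues 1..m-1, reduced modulo m.
    return {x * x % m for x in range(1, m)}

assert euler_totient(12) == 4              # 1, 5, 7, 11 are coprime with 12
assert quadratic_residues(7) == {1, 2, 4}  # so 3, 5, 6 are non-residues mod 7
```

Production cryptography computes the totient from a number's prime factorization rather than by brute force, which is exactly why factoring large composites being hard matters for RSA.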
- Protecting the Planet's Pollinators
Author: Elianna Gadsby Editor: Afreen Hossain Why are bees so important? As we know, bees are crucial to today's society. In fact, pollinators such as bees are responsible for 75% of pollination worldwide, and 1 in 3 mouthfuls of our food depends on pollinators. Researchers have also discovered a direct correlation between pollination and the nutritional value of food. However, bee numbers are depleting due to various diseases, threatening large parts of our food supply. One example of these diseases is American Foulbrood (AFB). What is American Foulbrood disease? AFB is caused by spores of the bacterium Paenibacillus larvae. It is one of the most devastating bee diseases. Young honey bees ingest the spores in their food, and within 1-2 days the spores take root in their gut, sprouting rod-like structures. These rods rapidly multiply before invading the blood and body tissues, killing the larvae from the inside out. Due to the disease's highly infectious nature, an infected colony previously had to be burnt or buried deeply. What is the solution? Fortunately, the world's first vaccine to combat AFB was approved for US usage on the 4th of January 2023. The mechanism of the vaccine is very interesting. Dead Paenibacillus bacteria are ingested orally in the Queen's royal jelly before she is introduced into the hive. The vaccine contents are then transferred to the bee's fat body for storage. Vitellogenin (a yolk protein) binds to pieces of the vaccine and delivers these specific immune elicitors to the Queen bee's eggs in the ovaries. The developing larvae are thus vaccinated and are more resistant to the infection as they hatch. However, exactly how the immune elicitors enter the insect eggs is still unknown. Whilst this vaccine is not a cure, it has decreased the risk of AFB by 30-50%. Why do I find the vaccine so interesting? 
I personally find this vaccine fascinating because it was previously thought that insects do not possess any kind of long-lasting immunity. In the last decade, however, scientists discovered features that could suggest a primitive immune system in the Queen bee. In addition, new research has shown that insects can pass immune information from one generation to the next through a mechanism called Transgenerational Immune Priming. As bees do not produce traditional antibodies, it was thought that a vaccine was not possible. Thanks to these recent discoveries, a vaccine to target one of the most aggressive apian diseases was created. What are the future applications of this technology? The vaccine not only contributes to enhancing colony health but also supports the production of items like medicinal honey and wax, fostering the growth of commercial beekeeping. There is optimism that it will open avenues for the development of other insect vaccines. Dalan Animal Health, the company behind the AFB vaccine, is also actively engaged in creating similar vaccines to address European Foulbrood (EFB). A combined AFB/EFB vaccine is currently in the pre-clinical phase, while a vaccine for Chalkbrood is nearing the pre-clinical stage. When asked about the future prospects of the vaccine, Hichole Hoffman, the Operations Manager at Dalan Animal Health, stated, "Dalan is working to expand the pipeline into other honeybee diseases. This research has the potential to impact many invertebrates, such as mealworms, shrimp, and other insects. Our vaccine and platform technology are pioneering a new era in the insect health sector, revolutionizing how we care for invertebrates." References [1] Graham, Flora. “Daily Briefing: World’s First Vaccine for Bees.” Nature, 11 Jan. 2023, https://doi.org/10.1038/d41586-023-00058-5. [2] “How Does the World’s First Vaccine for Honeybees Work? 
‘It’s like Magic.’” www.cbsnews.com, www.cbsnews.com/news/first-vaccine-honeybees-its-like-magic/. [3] Kuta, Sarah. “The World’s First Vaccine for Honeybees Is Here.” Smithsonian Magazine, 9 Jan. 2023, www.smithsonianmag.com/smart-news/the-worlds-first-vaccine-for-honeybees-is-here-180981400/. [4] “Science — Dalan Animal Health.” www.dalan.com, www.dalan.com/science.
- Novel anticancer mechanisms in animals
Author: Himanshu Sadulwad Cancer is the second leading cause of death worldwide, after cardiovascular diseases. Even more agonizing than the mortality rate is the physical and mental suffering associated with it. The question commonly asked is, 'Will there ever be a cure for cancer?' The answer to this simple question is difficult, because cancer is not one disease but many disorders that share a profound growth dysregulation. The only hope for containing this disease is to study its development and pathogenicity. Many model organisms have been used to study these properties. While studying these organisms, it was observed that certain species such as the naked mole rat, blind mole rat, certain bats, elephants and whales are resistant to cancer. Studying these organisms can help us understand the onset of the disease and the body's natural defense mechanisms against it. Cancer and its onset Cancer is characterized by loss of control of cellular growth and development, leading to excessive proliferation and spread of cells. Characteristics of cancer cells Loss of contact inhibition: Normal cells are characterized by contact inhibition; that is, they form monolayers and stop proliferating when they contact one another. Cancer cells can form multiple layers. Metastasis: The spread of cancer cells from the primary site of origin to other tissues of the body, where they produce secondaries. Loss of anchorage dependence Increased rate of replication and transcription Increased glycolysis Molecular basis Cancer is caused by genetic changes in a single cell, resulting in its uncontrolled multiplication. Oncogenes Genes capable of causing cancer are called oncogenes. Their counterparts in a normal cell are termed protooncogenes. 
Activation of a protooncogene to an oncogene Mechanisms include: Viral insertion into the chromosome Chromosomal translocation Gene amplification Point mutation Factors causing oncogene activation Environmental factors Mutations Oncogenic viruses Inactivation of antioncogenes Defense mechanisms against cancer Different species require different numbers of mutational 'hits', that is, inactivations or activations of specific genes, for malignant transformation. Two hits are required for transformation of mouse fibroblasts, namely inactivation of either Trp53 or Rb1 and activation of Hras. In contrast, 5 hits are required to transform human fibroblasts: Inactivation of: TP53 (Tumor protein 53, or p53, a tumor suppressor protein) RB1 (Retinoblastoma-associated protein) PP2A (Protein phosphatase 2A) Constitutive activation of: Telomerase HRAS (Harvey rat sarcoma) The need for anticancer mechanisms This data suggests that humans have evolved more robust anticancer defense mechanisms than mice. Evolutionary pressure to evolve anticancer mechanisms is very strong, because an animal developing cancer prior to its reproductive age would leave no progeny. Thus, animals developed efficient anticancer mechanisms that delay the onset of tumors until post-reproductive age; cancer becomes more frequent in aged animals because, past reproductive age, they are no longer subject to natural selection. This implies that animals with a longer lifespan will develop more robust anticancer defenses, which keep them cancer-free until after their reproductive age. Another factor influencing the risk of cancer is body size. Larger animals have more somatic cells and can accumulate more mutations, statistically increasing the risk of cancer development. To counteract this risk, large-bodied species have evolved more efficient tumor suppressor mechanisms. Therefore, novel and more sophisticated anticancer strategies are found in long-lived and large-bodied mammals. 
The molecular mechanisms of cancer resistance are an area of interest for cancer research. These mechanisms have been evolutionarily selected over millions of years, and understanding them may hold the key to enhancing cancer resistance in humans. General study of anticancer mechanisms in species Telomerase is a ribonucleoprotein that replicates the repetitive sequences at the ends of chromosomes, known as telomeres. It must be de-repressed to transform human cells, but it is constitutively active in the mouse. DNA polymerases cannot fully replicate chromosome ends, as they require an RNA primer to start; this is referred to as the ‘end replication problem'. Rebuilding chromosome ends is accomplished by telomerase, which carries its own RNA template. In most human somatic cells, expression of the protein component of telomerase, TERT, is silenced during embryonic differentiation. Because of this, when cells divide, their telomeres shorten, which eventually leads to replicative senescence. This is an important tumor suppressor mechanism that limits cell proliferation. Thus, mice are already a step closer to malignant transformation, as they constitutively express telomerase. There is a defined body-mass threshold of 5,000 to 10,000 g above which telomerase activity is repressed in the majority of somatic cells. This shows that, to counteract the statistical probability of developing tumors due to a larger body mass, these organisms evolved the mechanism of replicative senescence. It is also observed that larger and longer-lived species require more hits for transformation than smaller and shorter-lived species. Small, short-lived species require inactivation of Trp53 or Rb1 along with an activating mutation in Hras to form tumors. However, small species with longer lifespans require both Trp53 and Rb1 to be inactivated. In contrast, larger species require constitutive activation of telomerase along with the aforementioned changes to develop a tumor. 
Larger and longer-lived species further require the inactivation of PP2A. This indicates that body mass and lifespan play a vital role in shaping the various tumor suppressor mechanisms. It can be argued that replicative senescence should also have been evolutionarily selected in small animals to prevent tumor growth. This objection is answered by the simple hypothesis that a benign tumor arising prior to short-telomere-mediated growth arrest would already be hazardous for a small organism: a 3 g tumor greatly impedes the movements of a 30 g mouse but would be inconsequential to a 50 kg organism. Hence, small-bodied, long-lived organisms developed mechanisms that restrict cell proliferation early, at the hyperplasia stage. Some animals possess such robust anticancer mechanisms that they are almost cancer resistant. Let us study the defense mechanisms in some of them. Naked mole rat The naked mole rat (Heterocephalus glaber) is a mouse-sized rodent that inhabits subterranean tunnels in East Africa. Due to a constant underground temperature, it has no need for insulation and has lost its fur. It is the longest-living rodent, with a lifespan of 32 years in captivity. Out of thousands of these animals observed, only 6 cases of neoplasms were reported, and those occurred after exposure to greater light and temperature ranges. The naked mole rat is a small, long-lived mammal and hence does not rely on replicative senescence. Rather, it relies on early-acting, anti-hyperplastic tumor suppressor mechanisms. Its arsenal of anticancer mechanisms includes: Early contact inhibition It has a modified form of contact inhibition that is early-acting and arrests cell proliferation at stages earlier than the formation of a dense monolayer. It is triggered by activation of p16INK4A rather than p27, which is the activator in humans. If the gene encoding p16INK4A, Cdkn2aINK4A, is silenced, normal contact inhibition occurs via p27. 
To completely nullify contact inhibition, loss of both genes, Cdkn2aINK4A and Cdkn1b (which codes for p27), is required. Thus these rats have an increased level of protection. pALT The Cdkn2a-Cdkn2b locus contains key tumor suppressor genes. In humans it encodes the cyclin-dependent kinase (CDK) inhibitors p15INK4B and p16INK4A and a p53 activator protein, ARF. In the naked mole rat, however, alternative splicing produces pALT, which acts as a potent CDK inhibitor. High molecular mass hyaluronan Hyaluronan is a linear glycosaminoglycan, the major non-protein component of the extracellular matrix. Longer molecules of hyaluronan have anti-proliferative, anti-inflammatory and anti-metastatic properties. Naked mole rats have hyaluronan molecules 6 to 30 times longer than those in humans. This occurs due to two factors: the hyaluronan synthase 2 gene (Has2) has a unique sequence leading to higher production, and hyaluronidases have reduced activity in naked mole rat tissues. Inactivation of Tp53 and Rb1 The inactivation of these tumor suppressors causes apoptosis in naked mole rat cells, as opposed to the rapid proliferation that occurs in human cells. Similarly, inactivation of Cdkn2aARF, which reduces the activity of p53, also triggers senescence in them. Additional mechanisms of cancer resistance in naked mole rat cells include high-fidelity protein synthesis, more active antioxidant pathways and more active proteolysis. Blind mole rat The blind mole rat (Spalax ehrenbergi) has a lifespan of 21 years and is resistant to cancer. The modifications in this organism include: Reduced p53 activity The strictly subterranean life of the blind mole rat resulted in its increased tolerance to hypoxia. To avoid hypoxia-induced apoptosis, it has a modified Tp53 sequence which weakens p53. 
Concerted cell death It was observed that after 12-15 population doublings, the entire culture of blind mole rat cells dies within 3-4 days via a combination of necrotic and apoptotic processes, mediated by a massive release of IFN-β into the medium. This suggests that blind mole rat cells are acutely sensitive to hyperplasia. Production of HMM-HA The high molecular mass hyaluronan slows the proliferation of tumor cells. Reduced activity of heparanase Heparanase is an endoglycosidase that degrades heparan sulfate on the cell surface and in the ECM. The blind mole rat expresses a splice variant of heparanase that acts as a dominant negative and inhibits matrix degradation. This, along with abundant expression of HMM-HA, results in a more structured ECM that restricts tumor growth and metastasis. Elephants and whales Peto's paradox In 1977, Peto noted that while humans have 1000 times more cells than a mouse and are much longer-lived, human cancer risk is not higher than that of the mouse. This observation was seemingly inconsistent with the multistage carcinogenesis model, according to which individual cells become cancerous after accumulating a specific number of mutational hits. The contradiction became known as Peto's paradox. An answer to Peto's paradox is that different species do not need the same number of mutational hits. In other words, large-bodied and long-lived animal species have evolved additional tumor suppressor mechanisms to compensate for their increased numbers of cells. Furthermore, many large animals are also long-lived, hence they need additional protection from cancer over their lifespan. Anticancer mechanisms in elephants Elephants possess 19 extra copies of the TP53 gene. All the additional copies appear to be pseudogenes and contain deletions. Some of them are transcribed from neighboring, transposable-element-derived promoters. Transcripts from two of the 19 TP53 pseudogenes are translated in elephant fibroblasts. 
However, all the additional copies of TP53 are missing DNA-binding domains and the nuclear localization signal and, therefore, cannot function as transcription factors. Elephant cells have an enhanced p53-dependent DNA damage response leading to increased induction of apoptosis, compared to smaller related mammals, such as the armadillo and aardvark. Although the precise mechanism of action of the novel forms of TP53 is not known, it has been proposed that their protein products may act to stabilize the wild-type p53 protein by binding either to the wild-type p53 molecule itself or to its endogenous inhibitors, the MDM2 proteins. Anticancer mechanisms in whales Comparative genomic and transcriptomic studies in the bowhead whale identified genes under positive selection linked to cancer and aging, as well as bowhead-whale-specific changes in gene expression, including in genes involved in insulin signaling pathways. Notable examples of positively selected genes are excision repair cross-complementation group 1 (ERCC1), which encodes a DNA repair protein, and uncoupling protein 1 (UCP1), which encodes a mitochondrial protein of brown adipose tissue. In addition, these studies identified copy number gains and losses involving genes associated with cancer and aging, notably a duplication of proliferating cell nuclear antigen (PCNA). Since both ERCC1 and PCNA are involved in DNA repair, these proteins may protect from cancer by lowering mutation rates; thus whales may not need extra copies of TP53 because their cells do not accumulate cancer-causing mutations and do not reach a pre-neoplastic stage. The slower metabolism of the largest mammals may also lead to lower levels of cellular damage and mutation, contributing to lower cancer incidence. Conclusion The reason for the diversity in tumor suppressive mechanisms is that the need for more efficient anticancer defenses has arisen independently in different phylogenetic groups. 
As species evolved larger body mass and longer lifespan, depending on their ecology, the tumor suppressor mechanisms had to adjust to become more efficient. In each case, the ecology and unique requirements of individual species would determine the outcome. While the ultimate goal of cancer research is to develop safe and efficient anticancer therapies as well as preventative strategies, what can be learnt from tumor-prone models has its limitations. Mice simply do not possess anticancer mechanisms that humans do not already have. With regard to inherently cancer resistant species, the potential for improving the development of anticancer therapies is much greater. Anticancer adaptations that evolved in these species may be missing in humans and if introduced into human cells could result in increased cancer resistance. For example, humans did not evolve HMM-HA, as they do not lead a subterranean lifestyle; hence, activating similar mechanisms in humans may be beneficial. HA is a natural component of human bodies and is well tolerated. Therefore, identifying strategies to systemically upregulate HMM-HA in human bodies may serve in cancer prevention for predisposed individuals or as a cancer treatment. Nature is a treasure trove of resources and while we seek an ideal anticancer mechanism, the answer may already be out there. Understanding the molecular mechanisms of multiple anticancer adaptations that evolved in different species and then developing medicines reconstituting these mechanisms in humans could lead to new breakthroughs in cancer treatment and prevention. References Cleeland CS, et al. Reducing the toxicity of cancer therapy: recognizing needs, taking action. Nat Rev Clin Oncol. 2012;9:471–478. doi: 10.1038/nrclinonc.2012.99. Lipman R, Galecki A, Burke DT, Miller RA. Genetic loci that influence cause of death in a heterogeneous mouse stock. J Gerontol A Biol Sci Med Sci. 2004;59:977–983. Szymanska H, et al. 
Neoplastic and nonneoplastic lesions in aging mice of unique and common inbred strains: contribution to modeling of human neoplastic diseases. Vet Pathol. 2014;51:663–679. doi: 10.1177/0300985813501334. Rangarajan A, Hong SJ, Gifford A, Weinberg RA. Species- and cell type-specific requirements for cellular transformation. Cancer Cell. Prowse KR, Greider CW. Developmental and tissue-specific regulation of mouse telomerase and telomere length. Proc Natl Acad Sci U S A. 1995;92:4818–4822. Keane M, et al. Insights into the evolution of longevity from the bowhead whale genome. Cell Reports. 2015;10:112–122. doi: 10.1016/j.celrep.2014.12.008. Abegglen LM, et al. Potential Mechanisms for Cancer Resistance in Elephants and Comparative Cellular Response to DNA Damage in Humans. JAMA. 2015;314:1850–1860. doi: 10.1001/jama.2015.13134. Nat Rev Cancer. 2018 Jul;18(7):433–441. PMC6015544.
- Building a Simple Image Viewer with Tkinter and Pillow in Python
Author: Afreen Introduction: In the realm of graphical user interfaces (GUI), Python's Tkinter library provides a versatile toolkit for creating interactive applications. When combined with the powerful image-processing capabilities of the Pillow library, you can easily develop a simple yet effective image viewer. In this article, we will explore a Python script that uses Tkinter and Pillow to create an interactive image viewer with navigation buttons and a slideshow feature. The highlighted portions contain the code, with the corresponding explanations either above or below them. To code along with the project, you can go to the following GitHub repository: https://github.com/AfreenInnovates/image-slider

Setting Up the Environment: Before diving into the code, ensure you have Tkinter and Pillow installed in your environment. You can install them using the following commands:

```shell
pip install tk
pip install Pillow
```

Understanding the Code: Now, let's break down the code step by step:

1. Importing Libraries: The script begins by importing the necessary libraries:

```python
from tkinter import *
from PIL import ImageTk, Image
```

Tkinter is employed for creating the graphical user interface, while Pillow handles the loading and processing of images.

2. Creating the Tkinter Window:

```python
root = Tk()  # Any name can be used instead of root.
```

This line initializes the main Tkinter window, serving as the container for the graphical elements.

3. Loading and Storing Images: The script loads a set of images and stores them in a list:

```python
my_img_1 = ImageTk.PhotoImage(Image.open("images/d1.jpg"))
my_img_2 = ImageTk.PhotoImage(Image.open("images/d2.jpg"))
# ... Repeat for other images.
```

Here, d1.jpg and so on are images in the folder “images”. If you just want to access one image, then just type:

```python
my_img_1 = ImageTk.PhotoImage(Image.open("d1.jpg"))  # Check the repo link to understand.
```

Storing all images in a list (same as an array):
```python
my_images = [my_img_1, my_img_2, my_img_3, my_img_4, my_img_5]
img_num = 1  # Tracks the currently displayed image (1-based index).
```

The images are converted into Tkinter PhotoImage objects and organized into a list named `my_images`. The counter `img_num` must be initialized here, since the navigation functions and the counter label below both rely on it.

4. Displaying the first image:

```python
my_label_1 = Label(root, image=my_img_1)
my_label_1.grid(row=0, column=0, columnspan=3)
```

This code creates a Tkinter Label widget (`my_label_1`) to display the first image. The label is positioned on the grid in the first row, spanning three columns.

6. Functions for handling buttons:

```python
def btn_forw():
    global img_num
    # If the current image is the first, disable the previous button
    if img_num == 1:
        prev_btn.config(state=DISABLED)
    # Increment the image number
    img_num += 1
    # Check if we have reached the end of the image list
    if img_num > len(my_images):
        img_num = 1  # Wrap around to the first image
    # Update the display
    update_display()
    # Enable or disable navigation buttons based on the current image number
    prev_btn.config(state=NORMAL if img_num > 1 else DISABLED)
    forw_btn.config(state=NORMAL if img_num < len(my_images) else DISABLED)
```

The function uses the global keyword to access and modify the global variable img_num. It first checks whether the current image is the first one; if so, it disables the previous button (prev_btn), since there is no previous image. The image number is then incremented, and the function checks whether it has passed the end of the image list; if so, it wraps around to the first image for a continuous loop. The update_display() function is called to refresh the displayed image. Finally, the state of the previous and forward buttons is adjusted based on the current image number to enable or disable them accordingly.
```python
def btn_prev():
    global img_num
    # If the current image is the last, disable the forward button
    if img_num == len(my_images):
        forw_btn.config(state=DISABLED)
    # Decrement the image number
    img_num -= 1
    # Check if we have reached the start of the image list
    if img_num < 1:
        img_num = len(my_images)  # Set to the last image
    # Update the display
    update_display()
    # Enable or disable navigation buttons based on the current image number
    prev_btn.config(state=NORMAL if img_num > 1 else DISABLED)
    forw_btn.config(state=NORMAL if img_num < len(my_images) else DISABLED)
```

Similar to btn_forw(), this function uses the global keyword to access and modify the global variable img_num. It checks whether the current image is the last one; if so, it disables the forward button (forw_btn), since there is no next image. The image number is decremented, and the function checks whether it has passed the start of the image list; if so, it sets the image number to the last image for a continuous loop. The update_display() function is called to refresh the displayed image. Finally, the state of the previous and forward buttons is adjusted based on the current image number to enable or disable them accordingly.

```python
def update_display():
    my_label_1.configure(image=my_images[img_num - 1])
    img_num_label.config(text=f"Image {img_num}/{len(my_images)}")
```

`my_label_1` is updated with the image from `my_images` at the current index (`img_num - 1`). The text of `img_num_label` is updated to reflect the current image number out of the total number of images.

7. Navigation Buttons:

```python
prev_btn = Button(root, text="<<", command=btn_prev, state=DISABLED)
exit_btn = Button(root, text="Exit app", command=root.quit)
forw_btn = Button(root, text=">>", command=btn_forw)
```

Three buttons are created for navigation: moving backward, quitting the application, and moving forward through the images.

8.
Grid Placement for Buttons:

prev_btn.grid(row=1, column=0)
exit_btn.grid(row=1, column=1)
forw_btn.grid(row=1, column=2)

The navigation buttons are positioned on the grid in the second row.

9. Image Counter Label:

img_num_label = Label(root, text=f"Image {img_num}/{len(my_images)}")
img_num_label.grid(row=2, column=0, columnspan=3)

A label (`img_num_label`) is created to display the current image number out of the total number of images. It is placed in the third row, spanning three columns.

10. Slideshow Feature:

def start_slideshow():
    btn_forw()  # Advance to the next image
    root.after(1000, start_slideshow)  # Schedule the next change in 1000 ms (1 second)

start_slideshow_btn = Button(root, text="Start Slideshow", command=start_slideshow)
start_slideshow_btn.grid(row=3, column=0, columnspan=3)

The script defines a function start_slideshow that automatically advances to the next image at regular intervals using root.after(). The function is triggered by the "Start Slideshow" button.

11. Main Event Loop:

root.mainloop()

This line starts the main event loop of the Tkinter application, keeping the graphical user interface responsive.

Congratulations on creating an amazing project! Keep progressing and create even more :)
- Pandemic at a perspective: What comes next
Author: Himanshu Sadulwad

Prevention of any disease is largely based on the epidemiology of the disease. To effectively lower the number of COVID-19 cases globally, special emphasis has been placed on epidemiological studies, which help us find the underlying causes of surges in cases.

Variants of Concern
Like other viruses, SARS-CoV-2 evolves over time. Most mutations in the SARS-CoV-2 genome have negligible impact on viral functions, but some variants have gained attention due to their rapid emergence, transmission, and clinical implications. These are termed variants of concern (VOCs). Early in the pandemic, a study that monitored amino acid changes in the spike protein of SARS-CoV-2 identified a D614G (glycine for aspartic acid) substitution that became the dominant polymorphism globally over time. In animal and in vitro studies, viruses bearing the G614 polymorphism demonstrate higher levels of infectious virus in the respiratory tract, enhanced binding to ACE-2, and increased replication and transmissibility compared with the D614 polymorphism. The G614 variant does not appear to be associated with a higher risk of hospitalization or to impact anti-spike antibody binding. It is now present in most circulating SARS-CoV-2 lineages.

Alpha (B.1.1.7 lineage)
This variant was first reported in the UK in December 2020 and remained the globally dominant variant until the emergence of the Delta variant. It includes 17 mutations in its viral genome, 8 of which are in the spike protein. This results in an increased affinity of the protein for ACE-2 receptors, which enhances viral attachment and subsequent entry into host cells. This variant was reported to be 43% to 82% more transmissible than preexisting variants. It was also associated with an increase in mortality compared to other variants.

Beta (B.1.351 lineage)
The Beta variant was first identified in South Africa in October 2020.
It includes 9 mutations in the spike protein, 3 of which increase binding to ACE-2 receptors. The main concern with this variant was immune evasion, as it showed reduced neutralization by monoclonal antibody therapy, convalescent sera, and post-vaccination sera.

Gamma (P.1 lineage)
The third variant of concern was identified in December 2020 in Brazil. It harbors 10 mutations in the spike protein. It did not become a globally dominant variant.

Delta (B.1.617.2 lineage)
This variant was initially identified in December 2020 in India and was responsible for the deadly second wave of COVID-19 infections in April 2021. It harbors 10 mutations in its spike protein. Compared to the Alpha variant, the Delta variant was more transmissible and associated with a higher risk of severe disease and hospitalization.

Omicron (B.1.1.529 lineage)
This variant was first identified by the WHO in South Africa on 23 November 2021 after a sharp rise in the number of COVID-19 cases. It was quickly recognized as a VOC due to more than 30 changes to the spike protein of the virus. Initial modeling suggests that it shows a 13-fold increase in viral infectivity and is 2.8 times more infectious than the Delta variant. It is also reported to evade infection- and vaccine-induced humoral immunity to a greater extent than prior variants.

Immune evasion
Omicron appears to escape humoral immunity and to be associated with a higher risk of reinfection in individuals previously infected with a different strain. These observations are further supported by findings from several laboratories, in which sera from individuals with prior infection or prior vaccination did not neutralize Omicron as well as other variants; in some cases, neutralizing activity against Omicron was undetectable in convalescent as well as post-vaccination sera.

Severity of disease
Observational data suggest that the risk of severe disease with Omicron infection is lower than with other variants.
An analysis from England estimated that the risk of hospital admission or death with Omicron was approximately one-third of that with Delta, adjusted for age, sex, vaccination status, and prior infection. The reduced risk of severe disease may reflect partial protection conferred by prior infection or vaccination. However, animal studies showing lower viral levels in lung tissue and milder clinical features (e.g., less weight loss) with Omicron compared with other variants provide further support that Omicron infection may be intrinsically less severe. On the other hand, even if the individual risk of severe disease with Omicron is lower than with prior variants, the high number of associated cases can still result in high numbers of hospitalizations and an excess burden on the health care system.

Omicron Sublineages
The original Omicron variant is sublineage BA.1. Sublineage BA.2, which differs by approximately 40 mutations, demonstrates a replication advantage compared with BA.1 and accounts for the majority of Omicron sequences globally. Vaccine efficacy appears largely similar for BA.2 in comparison to BA.1. Although reinfections with BA.2 in individuals with prior BA.1 infection occur, they have been rare and mainly in unvaccinated individuals. Accordingly, BA.1 infection in vaccinated individuals appears to elicit neutralizing antibodies with potent activity against BA.2 as well. Other variants within the Omicron lineage include recombinant variants (e.g., XE, a combination of BA.1 and BA.2) and new sublineages (e.g., BA.2.12.1, BA.4, BA.5). Some of these appear to have a replication advantage compared with other Omicron sublineages; however, it is unknown whether their impact on disease severity or immune escape differs from that of other Omicron sublineages.
Variants of Interest
VOIs are defined as variants with specific genetic markers that have been associated with changes that may cause enhanced transmissibility or virulence, reduced neutralization by antibodies obtained through natural infection or vaccination, the ability to evade detection, or a decrease in the effectiveness of therapeutics or vaccination. Since the beginning of the pandemic, the WHO has described eight variants of interest (VOIs): Epsilon (B.1.427 and B.1.429), Zeta (P.2), Eta (B.1.525), Theta (P.3), Iota (B.1.526), Kappa (B.1.617.1), Lambda (C.37), and Mu (B.1.621).

Epidemiology
Since the first cases of COVID-19 were reported in Wuhan, Hubei Province, China, in December 2019 and the subsequent declaration of COVID-19 as a global pandemic by the WHO in March 2020, this highly contagious infectious disease has spread to 223 countries, with more than 281 million cases and more than 5.4 million deaths reported globally. These reported case counts underestimate the burden of COVID-19, as only a fraction of acute infections are diagnosed and reported. Seroprevalence surveys in the United States and Europe have suggested that, after accounting for potential false positives or negatives, the rate of prior exposure to SARS-CoV-2, as reflected by seropositivity, exceeds the incidence of reported cases by approximately 10-fold or more. Persons of all ages are at risk of infection and severe disease. However, patients aged ≥60 years and patients with underlying medical comorbidities such as obesity, cardiovascular disease, chronic kidney disease, diabetes, chronic lung disease, smoking, cancer, and solid organ or hematopoietic stem cell transplantation are at increased risk of developing severe COVID-19. In fact, the percentage of COVID-19 patients requiring hospitalization was six times higher in those with preexisting medical conditions than in those without.
Notably, the percentage of patients who succumbed to this viral illness was 12 times higher in those with preexisting medical conditions than in those without. Data regarding gender-based differences in COVID-19 suggest that male patients are at higher risk of developing severe illness and of increased mortality due to COVID-19 compared to female patients. Similarly, the severity of infection and mortality related to COVID-19 differ between ethnic groups.

Will it ever end?
Since December 2019, this virus has been wreaking havoc around the globe, disrupting all walks of life. While our healthcare systems work tirelessly to provide treatment, researchers have stepped up to help solve the situation. Although IHME models suggest that global daily SARS-CoV-2 infections increased by more than 30 times from the end of November 2021 to January 17, 2022, reported COVID-19 cases in this period increased by only six times, because the proportion of cases that are asymptomatic or mild has increased compared with previous SARS-CoV-2 variants; the global infection-detection rate has declined from 20% to 5%. Despite the reduced disease severity per infection, the massive wave of Omicron infections means that hospital admissions are increasing in many countries and, according to the IHME models, will rise to twice or more the number of COVID-19 hospital admissions of past surges in some countries. So the question stands: when will this be over? Pandemics do not end overnight with a parade or an armistice. Usually, the virus evolves to a less severe variety, the majority of the population develops resistance to it, and the disease fades into the background. If this happens, the era of extraordinary measures by governments to control the transmission of the disease will be over. The pandemic may end, but COVID-19 will return.
References
Giovanetti M, Benedetti F, Campisi G, et al. Evolution patterns of SARS-CoV-2: Snapshot on its genome variants. Biochem Biophys Res Commun. 2021 Jan 29;538:88-91.
Korber B, Fischer WM, Gnanakaran S, et al. Tracking Changes in SARS-CoV-2 Spike: Evidence that D614G Increases Infectivity of the COVID-19 Virus. Cell. 2020 Aug 20;182(4):812-827.e19.
Volz E, Mishra S, Chand M, et al. Assessing transmissibility of SARS-CoV-2 lineage B.1.1.7 in England. Nature. 2021 May;593(7858):266-269.
Aleem A, Akbar Samad AB, Slenker AK. Emerging Variants of SARS-CoV-2 And Novel Therapeutics Against Coronavirus (COVID-19) [Updated 2022 May 12]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-.
https://doi.org/10.1016/S0140-6736(22)00100-3
Hale T, et al. Nat Hum Behav. 2021;5:529-538.

(This is the third article in the series, Pandemic at a perspective)
- The race for a cure
Author: Himanshu Sadulwad

If one gets infected with COVID-19, not all hope is lost. There are treatment options available based on the severity of the disease and the strain that has infected the individual.

Stages of the disease
Before listing the options available for treatment, let us first take a look at the different stages through which this disease progresses. The National Institutes of Health (NIH) issued guidelines that classify COVID-19 into five distinct severity categories.
Asymptomatic or presymptomatic infection: individuals with a positive SARS-CoV-2 test but without any clinical symptoms consistent with COVID-19.
Mild illness: individuals who have any symptoms of COVID-19, such as fever, cough, sore throat, malaise, headache, muscle pain, nausea, vomiting, diarrhea, anosmia (loss of smell), or dysgeusia (altered taste), but without shortness of breath or abnormal chest imaging.
Moderate illness: individuals who have clinical symptoms or radiologic evidence of lower respiratory tract disease and an oxygen saturation (SpO2) ≥ 94% on room air.
Severe illness: individuals who have SpO2 < 94% on room air; a ratio of partial pressure of arterial oxygen to fraction of inspired oxygen (PaO2/FiO2) < 300; marked tachypnea with respiratory frequency > 30 breaths/min; or lung infiltrates > 50%.
Critical illness: individuals who have acute respiratory failure, septic shock, and/or multiple organ dysfunction.
Patients with severe COVID-19 illness may become critically ill with the development of acute respiratory distress syndrome (ARDS), which tends to occur approximately one week after the onset of symptoms.

Medicines to treat COVID-19
Currently, a variety of therapeutic options are available, including antiviral drugs, anti-SARS-CoV-2 monoclonal antibodies, anti-inflammatory drugs, and immunomodulatory agents, either under an FDA-issued Emergency Use Authorization (EUA) or being evaluated in the management of COVID-19.
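Since treatment choice hinges on these severity categories, it can help to see the thresholds written out explicitly. The following is a minimal sketch of the NIH classification; the function and parameter names are illustrative, and this is not clinical software:

```python
from typing import Optional

def classify_covid19_severity(
    symptomatic: bool,
    lower_respiratory_disease: bool = False,
    spo2_room_air: Optional[float] = None,        # oxygen saturation on room air, %
    pao2_fio2: Optional[float] = None,            # PaO2/FiO2 ratio
    resp_rate: Optional[float] = None,            # breaths per minute
    lung_infiltrates_pct: Optional[float] = None, # percent of lung involved
    resp_failure_shock_or_mods: bool = False,     # respiratory failure, septic shock, or MODS
) -> str:
    """Map measurements onto the five NIH severity categories (illustrative only)."""
    if resp_failure_shock_or_mods:
        return "critical"
    # Any one of these criteria qualifies as severe illness.
    severe = (
        (spo2_room_air is not None and spo2_room_air < 94)
        or (pao2_fio2 is not None and pao2_fio2 < 300)
        or (resp_rate is not None and resp_rate > 30)
        or (lung_infiltrates_pct is not None and lung_infiltrates_pct > 50)
    )
    if severe:
        return "severe"
    if lower_respiratory_disease:
        return "moderate"
    return "mild" if symptomatic else "asymptomatic"
```

For example, a symptomatic patient with lower respiratory tract disease and SpO2 of 96% on room air falls into the moderate category, while an SpO2 of 92% alone already qualifies as severe.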
The clinical utility of these treatments is specific and is based on the severity of illness or certain risk factors. The clinical course of COVID-19 occurs in two phases: an early phase, before or soon after the onset of symptoms, when SARS-CoV-2 replication is greatest. Antiviral medications and antibody-based treatments are likely to be more effective during this stage of viral replication. The later phase of the illness is driven by a hyperinflammatory state induced by the release of cytokines and activation of the coagulation system, which causes a prothrombotic state. Anti-inflammatory drugs such as corticosteroids, immunomodulating therapies, or a combination of these therapies may combat this hyperinflammatory state better than antiviral therapies.

Antiviral therapies:
Molnupiravir
Named after Mjölnir, the hammer of the Norse god Thor, molnupiravir is a direct-acting, broad-spectrum oral antiviral agent targeting the RdRp enzyme. It was initially developed as a possible antiviral treatment for influenza and for alphaviruses, including the Eastern, Western, and Venezuelan equine encephalitis viruses. Based on a meta-analysis of available phase 1-3 studies, molnupiravir was noted to demonstrate a significant reduction in hospitalization and death in mild COVID-19 disease. Results from a phase 3, double-blind, randomized, placebo-controlled trial reported that early treatment with molnupiravir reduced the risk of hospitalization or death in at-risk, unvaccinated adults with mild-to-moderate, laboratory-confirmed COVID-19.

Paxlovid
It is an oral combination pill of two antiviral agents, nirmatrelvir and ritonavir. An interim analysis of phase 2-3 data found that the risk of COVID-19-related hospital admission or all-cause mortality was 89% lower in the Paxlovid group compared to placebo when treatment was started within three days of symptom onset. The FDA approved Paxlovid for patients with mild to moderate disease.
Remdesivir
It is a broad-spectrum antiviral agent that previously demonstrated antiviral activity against SARS-CoV-2 in vitro. Based on results from three randomized, controlled clinical trials showing that remdesivir was superior to placebo in shortening the time to recovery in adults hospitalized with mild-to-severe COVID-19, the FDA approved remdesivir for clinical use in adults and pediatric patients. However, results from the WHO SOLIDARITY Trial, conducted at 405 hospitals across 40 countries and involving 11,330 inpatients with COVID-19 randomized to receive remdesivir (2,750) or no drug (4,088), found that remdesivir had little or no effect on overall mortality, initiation of mechanical ventilation, or length of hospital stay. There is no data available regarding the efficacy of remdesivir against the new SARS-CoV-2 variants; however, acquired resistance in mutant viruses is a potential concern.

Hydroxychloroquine and chloroquine
These were proposed as antiviral treatments for COVID-19 early in the pandemic. However, in data from randomized controlled trials, hydroxychloroquine with or without azithromycin did not improve clinical status or overall mortality in hospitalized patients compared to placebo, and in randomized controlled trials of hydroxychloroquine used as postexposure prophylaxis it did not prevent SARS-CoV-2 infection or symptomatic COVID-19 illness.

Lopinavir/Ritonavir
This FDA-approved combination therapy for the treatment of HIV was proposed as an antiviral therapy against COVID-19 early in the pandemic. However, a randomized controlled trial reported no benefit with lopinavir-ritonavir treatment compared to standard of care in patients hospitalized with severe COVID-19. It is currently not indicated for the treatment of COVID-19 in hospitalized or nonhospitalized patients.
Ivermectin
It is an FDA-approved anti-parasitic drug that was used worldwide in the treatment of COVID-19, based on an in vitro study that showed inhibition of SARS-CoV-2 replication. In a single-center, double-blind, randomized controlled trial, 476 adult patients with mild COVID-19 illness were randomized to receive ivermectin 300 mcg/kg body weight for five days or placebo; ivermectin did not achieve significant improvement or resolution of symptoms. Ivermectin is currently not indicated for the treatment of COVID-19 in hospitalized or nonhospitalized patients.

Anti-SARS-CoV-2 Neutralizing Antibody Products:
Individuals recovering from COVID-19 develop neutralizing antibodies against SARS-CoV-2, although how long this immunity lasts is unclear. Nevertheless, the role of these antibodies as therapeutic agents in the management of COVID-19 is being extensively pursued in ongoing clinical trials.

Convalescent plasma therapy
This therapy was evaluated during the SARS, MERS, and Ebola epidemics; however, it lacked randomized controlled trials to back its actual efficacy. The FDA approved convalescent plasma therapy under an EUA for patients with severe, life-threatening COVID-19. Although it appeared promising, data from multiple studies evaluating the use of convalescent plasma in life-threatening COVID-19 have generated mixed results.

REGN-COV2
It is an antibody cocktail containing two noncompeting IgG1 antibodies (casirivimab and imdevimab) that target the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein. It has been shown to decrease viral load in vivo, preventing virus-induced pathological sequelae when administered prophylactically or therapeutically in non-human primates. Results from an interim analysis of 275 patients in an ongoing double-blinded trial of nonhospitalized patients with COVID-19, randomized to receive the antibody cocktail or placebo, reported that REGN-COV2 reduced viral load compared to placebo.
This interim analysis also established the safety profile of this antibody cocktail, which was similar to that of the placebo group.

Bamlanivimab and Etesevimab
These are potent anti-spike neutralizing monoclonal antibodies. Bamlanivimab is a neutralizing monoclonal antibody derived from convalescent plasma obtained from a patient with COVID-19. Like REGN-COV2, it targets the RBD of the spike protein of SARS-CoV-2 and has been shown to neutralize SARS-CoV-2 and reduce viral replication in non-human primates. In Phase 2 of the BLAZE-1 trial, bamlanivimab/etesevimab was associated with a significant reduction in SARS-CoV-2 viral load compared to placebo.

Sotrovimab
It is a potent anti-spike neutralizing monoclonal antibody that demonstrated in vitro activity against all four VOCs. Results from a preplanned interim analysis (not yet peer-reviewed) of a multicenter, double-blind, placebo-controlled Phase 3 trial demonstrated that one dose of sotrovimab (500 mg) reduced the risk of hospitalization or death by 85% in high-risk nonhospitalized patients with mild to moderate COVID-19 compared with placebo.

Immunomodulatory Agents:
Corticosteroids
Severe COVID-19 is associated with inflammation-related lung injury driven by the release of cytokines and characterized by an elevation in inflammatory markers. The Randomized Evaluation of Covid-19 Therapy (RECOVERY) trial, which included hospitalized patients with clinically suspected or laboratory-confirmed SARS-CoV-2 who were randomly assigned to receive dexamethasone or usual care, showed that the use of dexamethasone resulted in lower 28-day mortality in patients who were on invasive mechanical ventilation or oxygen support, but not in patients who were not receiving any respiratory support.
Interferon-β-1a
Interferons are cytokines that are essential in mounting an immune response to a viral infection, and SARS-CoV-2 suppresses their release in vitro. However, previous experience with IFN-β-1a in acute respiratory distress syndrome (ARDS) has not shown benefit. Currently, there is no data available regarding the efficacy of interferon-β-1a against the four SARS-CoV-2 VOCs Alpha (B.1.1.7), Beta (B.1.351), Gamma (P.1), and Delta (B.1.617.2). Given the insufficient data regarding this agent's use and its relative potential for toxicity, this therapy is not recommended to treat COVID-19 infection.

Interleukin (IL)-1 Antagonists
Anakinra is an interleukin-1 receptor antagonist that is FDA approved to treat rheumatoid arthritis. Its off-label use in severe COVID-19 was assessed in a small case-control study, based on the rationale that severe COVID-19 is driven by cytokine production, including interleukin (IL)-1β. In this study, which compared 52 patients who received anakinra with 44 patients who received standard of care, anakinra reduced the need for invasive mechanical ventilation and mortality in patients with severe COVID-19.

Anti-IL-6 receptor Monoclonal Antibodies
Interleukin-6 (IL-6) is a proinflammatory cytokine considered a key driver of the hyperinflammatory state associated with COVID-19. Targeting this cytokine with an IL-6 receptor inhibitor could slow the process of inflammation, based on case reports that showed favorable outcomes in patients with severe COVID-19. The FDA has approved three different IL-6 inhibitors for various rheumatological conditions (tocilizumab, sarilumab) and a rare disorder called Castleman's syndrome (siltuximab).

Need for a vaccine
The most fruitful way to prevent the spread of any disease is to immunize the population against it. This is where vaccines come into the picture.
But creating a vaccine against a viral disease whose pathogen has the ability to mutate is quite tricky. Nonetheless, multiple such efforts using various techniques have been successful and approved by the FDA for use. Let us take a look at them.

BNT162b2 vaccine (tozinameran)
This mRNA-based vaccine was developed by BioNTech/Pfizer. In individuals 16 years of age or older, a two-dose regimen of the vaccine given 21 days apart conferred 95% protection against COVID-19, with a safety profile similar to that of other viral vaccines. Based on the results of this vaccine efficacy trial, the FDA issued an EUA on December 11, 2020, granting the use of the BNT162b2 vaccine to prevent COVID-19.

mRNA-1273 vaccine
This mRNA-based vaccine was developed by Moderna. Results from another multicenter, Phase 3, randomized, observer-blinded, placebo-controlled trial demonstrated that individuals randomized to receive two doses of the vaccine given 28 days apart showed 94.1% efficacy at preventing COVID-19 illness, and no safety concerns were noted besides transient local and systemic reactions. Based on the results of this vaccine efficacy trial, the FDA issued an EUA on December 18, 2020, granting the use of the mRNA-1273 vaccine to prevent COVID-19.

Ad26.COV2.S vaccine (Johnson and Johnson vaccine)
It received an EUA from the FDA on February 27, 2021, based on the results of an international, multicenter, randomized, placebo-controlled phase 3 trial, which showed that a single dose of the Ad26.COV2.S vaccine conferred 73.1% efficacy in preventing COVID-19 in adult participants randomized to receive the vaccine.

ChAdOx1 nCoV-19 vaccine (Covishield)
It has a clinical efficacy of 70.4% against symptomatic COVID-19 after two doses and confers 64% protection after at least one standard dose. This too has been approved for use by multiple countries.
In addition to the vaccines mentioned above, as many as seven other vaccines, including protein-based and inactivated vaccines, have been developed indigenously in India (Covaxin), Russia (Sputnik V), and China (CoronaVac) and have been approved or granted emergency use authorization to prevent COVID-19 in many countries around the world.

References
Cascella M, Rajnik M, Aleem A, et al. Features, Evaluation, and Treatment of Coronavirus (COVID-19) [Updated 2022 Jan 5]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-.
Rodriguez-Guerra M, Jadhav P, Vittorio TJ. https://doi.org/10.7573/dic.2020-10-3. Published by Drugs in Context under Creative Commons License Deed CC BY NC ND 4.0.
Stasi C, et al. Treatment for COVID-19: An overview. Eur J Pharmacol. 2020;889:173644. doi:10.1016/j.ejphar.2020.173644.
Wu K, Werner AP, Moliva JI, et al. mRNA-1273 vaccine induces neutralizing antibodies against spike mutants from global SARS-CoV-2 variants. bioRxiv. 2021 Jan 25.

(This article is second in the series of articles 'Pandemic at a perspective')