Is The Iran Nuclear Deal a Good Deal?

Last week, 20 months of negotiations between 7 different countries came to fruition. The Joint Comprehensive Plan of Action (JCPOA) was signed by the US, China, Russia, UK, France, Germany, and Iran. The Iran Nuclear Deal, as it has been popularized, is a groundbreaking event in diplomacy with one of the most volatile nations in a historically unstable part of the world. The JCPOA has many people confused about not only the details, but also the general concepts of the plan. I will keep jargon to a minimum and explain it where needed, so that this post will, hopefully, dispel some of the confusion.

Here are some terms for those unfamiliar with the uranium enrichment process:
Isotope – A variant of an element that has a particular mass (number of neutrons + protons). Heavier and odd numbered isotopes tend to be less stable.

Uranium 235 – The uranium isotope that is easily split (fissile) to produce energy. Uranium ore contains 0.7% U-235.

Uranium 238 – A very long-lived uranium isotope that is not fissile and is not used directly for energy. Uranium ore contains 99.3% U-238.

Plutonium 239 – A fissile byproduct of nuclear reactors.

Heavy Water – Water molecules that contain Deuterium, which is a stable isotope of Hydrogen containing an extra neutron, giving it greater mass.

Low Enriched Uranium – Uranium with less than 20% concentration of U-235.

Highly Enriched Uranium – Uranium with greater than 20% concentration of U-235.

Uranium Hexafluoride (UF6) – Uranium bonded to six fluorine atoms. This form of uranium is necessary for enrichment with gas centrifuges.

Uranium Dioxide – Uranium bonded to two oxygen atoms. This form of uranium is packed into fuel rods and used as fuel for nuclear reactors.

Beta decay – A neutron can be seen as a proton and an electron combined. During beta decay, the neutron emits the electron (referred to as a beta particle, hence beta decay), which effectively turns the neutron into a proton, thus changing the element into a new element (one that is immediately after it on the periodic table). For example, if Carbon (#6 on the table) beta decayed, it would become Nitrogen (#7). This occurs in the atmosphere as a part of the Carbon 14 cycle.
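The beta decay just described can be written as a pair of equations (these include the antineutrino, which the simplified proton-plus-electron picture above leaves out):

```latex
% Beta decay: a neutron becomes a proton, emitting an electron
% (the beta particle) and an antineutrino:
n \;\rightarrow\; p + e^{-} + \bar{\nu}_e

% Applied to Carbon-14 (element #6): the mass number stays at 14,
% the atomic number rises by one, giving Nitrogen-14 (element #7):
{}^{14}_{6}\mathrm{C} \;\rightarrow\; {}^{14}_{7}\mathrm{N} + e^{-} + \bar{\nu}_e
```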

The most common thing I’ve heard regarding the deal is that people are uncomfortable with Iran having nuclear power “now.” This, I assume, stems from a misunderstanding of what the deal was designed to do. The JCPOA doesn’t give Iran anything; Iran has had a nuclear program for years, and has been enriching uranium to amounts that push the boundaries of normal energy usage. The JCPOA will require Iran to do a few things to reduce the chances of them creating a nuclear weapon, which I will explain one by one:

  • Reduce their current uranium stockpile by about 96%
  • No Uranium enrichment beyond 3.67% for 15 years
  • Use only ~ 5000 of the lowest efficiency centrifuges (out of about 19,000) for the next 10 years.
  • Redesign the Arak heavy water reactor
  • No building heavy water reactors or stockpiling of heavy water for 15 years
  • Allow comprehensive and unprecedented international inspections of facilities by the International Atomic Energy Agency (IAEA)
  • Convert the underground Fordow nuclear facility into a nuclear, physics, and technology center where international scientists will also be stationed
  • Ship spent fuel to other countries
  • In return for the above tenets being met, economically crippling sanctions by the UN, EU, US, and possibly other individual countries, will be lifted.

Let’s start with the first – Stockpile reduction:

This one seems to be an obvious win. Iran has around 20,000 lbs of low enriched uranium (~5% U-235) stockpiled. With this provision in place, that stockpile would be reduced to 660 lbs of low enriched uranium on hand. The reduction would be done by either shipping the uranium out of the country or diluting it. Iran also held about 460 lbs of 20% enriched uranium. Since January, 238.5 lbs of this has been diluted to less than 5% enrichment. A little over a pound was retained by the IAEA for reference, a fraction of a pound was taken by the IAEA for sampling, and the remaining 220 lbs is in the process of being converted into uranium dioxide, which is used for fuel rods. Research reactors, like the one in Tehran, run on fuel rods with 20% enriched uranium. Iran’s Fuel Plate Fabrication Plant has no process line by which the oxide can be converted back to UF6 to be further enriched.
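A quick sanity check on the numbers above (a sketch using the approximate figures quoted in this post, not official IAEA values):

```python
# Approximate figures quoted above, in pounds of low enriched uranium.
stockpile_before = 20_000  # ~5% U-235, pre-deal
stockpile_after = 660      # allowed on hand under the JCPOA

reduction = 1 - stockpile_after / stockpile_before
print(f"Stockpile reduction: {reduction:.1%}")  # matches the "about 96%" bullet
```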

3.67% enrichment cap:

The percentage of U-235 (enrichment level) in your uranium says a lot about your intentions. Uranium that is enriched to 3-5% is used in regular nuclear reactors for energy production. Uranium at 20% enrichment is often used for research and production of medical isotopes. Iran claims that it has enriched uranium to 20% in order to supply the Tehran reactor for production of medical isotopes. This is actually not an unreasonable claim. The last shipment of 20% enriched uranium into the country came from Argentina in 1992, and that supply would last about 20 years at most. So Iran does need 20% enriched uranium to continue producing medical isotopes, which are used in everything from radiation treatment to medical imaging.

Cut in centrifuge use:

The details on the types of centrifuges and their usage are some of the more complex parts of the JCPOA. However, the main points are pretty straightforward. The gas centrifuges used to enrich uranium are a little different than the typical scientific centrifuge. These centrifuges spin gaseous UF6 (see terms above) at very high speeds, so that the slightly heavier U-238 drifts toward the outer wall and is separated from the lighter U-235. This process isn’t very efficient, particularly with the old equipment that Iran would be required to use. Successful enrichment, even to 3.67%, requires an assembly line of centrifuges, where the product of one centrifuge becomes the feed of the next. Keep in mind that uranium ore contains less than 1% U-235. Under the JCPOA, Iran would only be allowed to use about 6000 of their almost 20,000 centrifuges. Is this enough to make a bomb? Sure. I suppose 600 would be enough. However, the point is to make it difficult – and overt – for Iran to enrich uranium to weapons grade.
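Why enrichment takes a cascade can be shown with a toy model. Assume, purely for illustration, that each stage multiplies the U-235/U-238 abundance ratio by a fixed separation factor (real centrifuge performance varies by machine and is more complicated, but the compounding logic is the same):

```python
import math

def stages_needed(start, target, separation_factor=1.3):
    """Toy cascade model: each stage multiplies the U-235/U-238
    abundance ratio by `separation_factor` (an illustrative value,
    not a measured figure for any real centrifuge)."""
    r0 = start / (1 - start)    # abundance ratio at the feed
    rt = target / (1 - target)  # abundance ratio at the product
    return math.ceil(math.log(rt / r0) / math.log(separation_factor))

# Natural uranium (0.7% U-235) up to the JCPOA cap of 3.67%:
print(stages_needed(0.007, 0.0367))
# ...versus weapons grade (~90%), which takes several times as many
# stages in series, plus far more feed material:
print(stages_needed(0.007, 0.90))
```

The exact numbers are artifacts of the assumed separation factor; the point is that reaching weapons grade requires a much longer, harder-to-hide cascade than reactor-grade enrichment.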

Redesigning the Arak Heavy Water Reactor:

The details on this are vague as of now. Supposedly, the reactor core will be filled with concrete and then redesigned according to UN regulations with the help of international scientists. Claims are that this will help reduce the potential for plutonium being produced in large quantities. I’m not entirely sure what kind of redesign would significantly reduce this potential, other than the fact that heavy water reactors do not require enriched uranium. Because the water is “heavy,” the reaction process is much more efficient. Heavy water already has extra neutrons, and so it is less likely to absorb the neutrons that are used to split U-235. Thus, your concentration of U-235 doesn’t need to be as high to achieve efficiency. A consequence of low-concentration U-235 is over 99% concentration of U-238. U-238 doesn’t split easily, so it tends to absorb neutrons, which will be in even higher abundance if the water isn’t absorbing them. When U-238 absorbs a neutron, it becomes U-239, which is unstable and beta decays (see terms for info) into Neptunium 239. Neptunium 239 is also unstable, so it beta decays into Plutonium 239, which can be used as fuel in the same way as U-235 if left in the fuel rod. However, Plutonium 239 can be removed as it is created and replaced with more uranium. This is how weapons-grade plutonium is stockpiled. Fortunately, this shouldn’t be a difficult thing for the IAEA to monitor, as the inspectors will know how much should be present. Much of the success of this deal will fall on how well the inspectors do their jobs.
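The plutonium-breeding chain described above can be written out step by step (a neutron capture followed by two beta decays):

```latex
% Step 1: U-238 captures a neutron, becoming unstable U-239:
{}^{238}\mathrm{U} + n \;\rightarrow\; {}^{239}\mathrm{U}

% Step 2: U-239 beta decays into Neptunium-239:
{}^{239}\mathrm{U} \;\rightarrow\; {}^{239}\mathrm{Np} + e^{-} + \bar{\nu}_e

% Step 3: Np-239 beta decays into fissile Plutonium-239:
{}^{239}\mathrm{Np} \;\rightarrow\; {}^{239}\mathrm{Pu} + e^{-} + \bar{\nu}_e
```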

No stockpiling heavy water or building heavy water reactors for 15 years:

This follows the previous point. Not only will Arak be redesigned, but Iran will not be allowed to build or collect material (heavy water) to build a heavy water reactor for 15 years.

Inspections

This part of the deal is a bit vague as well. However, it is one of the most important aspects. Iran is essentially on probation right now, and the IAEA is its probation officer. If Iran does anything wrong, sanctions, the lifting of which is the main reason Iran came to the table, will immediately go back into effect. It would be counterproductive for them to break the rules overtly, and it should be relatively easy to catch them if they try to do so covertly. IAEA inspectors will have the ability to inspect not only current reactors and research facilities (not to mention monitor uranium mining and imports), but will also be able to inspect “suspicious” areas. There is an appeals process, which could take up to a maximum of 4 weeks if Iran claims an inspection is unnecessary. However, let’s be real. The US and the rest of the world’s intelligence will be all over any suspicions of the IAEA inspectors. If it’s happening, especially on any scale that could be dangerous, we will find out. The last thing Iran wants is to be resanctioned and show that it cannot be trusted under any circumstances. Even a bad kid does what’s in his or her best interest.

Converting Fordow into a research center:

Fordow is a heavily fortified, underground uranium enrichment facility. Under the JCPOA, Iran will not enrich any uranium at Fordow, will convert it to a research center, and will allow international scientists to be stationed there. So, not only will the IAEA have inspection capabilities, but the world will have scientific eyes inside this facility, further reducing any chance of covert, illegitimate activity.

Shipping off spent fuel:

Spent fuel rods are where you get Plutonium 239, as described previously. Under the JCPOA, Iran will ship spent fuel rods out of the country for the lifetime of the Arak reactor, and will not build a reprocessing facility (necessary to separate out plutonium) for 15 years.

Sanctions will be lifted:

Economic sanctions from the US, EU, and UN, as well as other individual countries, have crippled Iran’s economy. These sanctions include heavily restricted imports and exports of many things, including oil, which is one of Iran’s biggest exports. Additionally, Iran has over $100 billion in frozen assets overseas, and was banned from participating in the international banking system. The economic sanctions crippled Iran for many years, deteriorating the quality of life for citizens as collateral damage. The sanctions will be lifted as Iran continues to show cooperation, allowing Iran to prove to the rest of the world that it can be a legitimate part of world trade.

Iran has been in “prison” for the last decade or so. They have been showing good behavior by diluting uranium stockpiles even before last week’s agreement was reached. They are now essentially on probation for 15 years. This is analogous to a recently released prisoner. You don’t just set them free; they do their time and then you assign them a probation officer – in this case, the IAEA. If the person shows good behavior and a willingness to be a contributing member of society, they are allowed more freedom. This is where Iran is with the JCPOA, and it is why this is a 15-year deal. Iran has 15 years to prove to the world that it can be a participating country in global interactions. The world will have 15 years to learn about Iran’s capabilities and prepare in the event that they break their probation. But, just as a prisoner wants nothing more than to avoid going back to prison, Iran wants nothing more than to avoid sanctions. This deal gives us a chance to form a somewhat diplomatic relationship with a country that, in the past, has been difficult to negotiate with. ISIS is also one of Iran’s biggest enemies, and this diplomatic relationship might help curtail them, but that is a topic for another post. Will this fix all the problems in the Middle East? No. Is Iran our ally now? Absolutely not. Ultimately, this deal lowers the chance of Iran creating a nuclear bomb, gives them a chance to demonstrate their ability to cooperate and participate in global affairs, and is a step closer to stabilizing the Middle East.

For those of you who are still wanting to use military action against Iran (because the West’s military interventions in the Middle East have been SO successful in the past) instead of trying diplomacy first, please read the document in the link below. It is an assessment of the pros and cons of military intervention in Iran by one of the most well regarded and respected think tank organizations in the world.

http://www.wilsoncenter.org/sites/default/files/IranReport_091112_FINAL.pdf

Why Cultural Appropriation Matters

Cultural appropriation is a tricky topic to unpack and explain in a manner that keeps the attention of those who believe it to be “PC crap,” but also doesn’t dampen the significance of the issue. But we should try anyway.

I’ve no doubt played a role in cultural appropriation throughout my life, with no bad intentions or awareness that I was doing anything harmful. Growing up in okla humma, Choctaw for “Red People,” I was surrounded by Native American culture. Half of the cities I can name in Oklahoma derive from a Native American word or phrase in the language of one of the 67 tribes represented in the state. You can buy dream catchers and arrowheads at gas stations along the interstate, and Oklahoma museums have some of the largest Native American collections in the world. The designation of Oklahoma as Indian Territory in the 19th century laid the foundation for the incredibly complex and muddled mixing of unique cultures that white people typically lump into “Native American” culture. This amalgamated meta-culture, if you will, has been commodified into a staple of Oklahoma tourist attractions and local affairs. To those born here, the combined Native American culture is a frequent part of everyday life, even though many don’t understand the significance of the cultural artifacts in their original context.


Why Can’t Rachel Dolezal be Black?

The news of Rachel Dolezal as someone who has “pretended” to be black came to light at an interesting time. A few weeks ago, former decathlon gold medalist Bruce Jenner came out as a trans woman, henceforth identifying as Caitlyn Jenner. It is broadly accepted among academics that gender and sex are not the same thing; sex is a biological reality, and gender is a social role that someone fills in society. While biological sex tends to be binary (male/female, with the exception of things such as intersex), gender can be seen as more of a spectrum. So how does this relate to race, or does it?

As a preface, I am not suggesting that gender is culturally equivalent to race, though both are cultural constructs and neither is a biological reality. Race is a manner in which people are classified by phenotypic characteristics (often skin color), while gender, though often defined by phenotypic characteristics, describes a role in society. Race does not define a societal role. That being said, there are similarities between race and gender insofar as both relate to social identity and both can be seen as a spectrum.


An Evolutionary Explanation For Why You Wear Glasses

Empirically testing health-related hypotheses formulated through an evolutionary lens can prove to be difficult. Our environment and experience are radically different from those of the first 6 million years of human evolution. Living at the leading edge of human existence and the top end of the techno-scientific scale, we are far removed from the environment to which many of our genes are hypothesized to be suited. Fortunately, the human race is a diverse group of individuals who have dispersed across the globe and acclimated to a variety of circumstances. Accordingly, a few hunter-gatherer societies remain in parts of Africa. Though neither their genes nor their cultures are identical to those of the original hunter-gatherers, they retain the closest genetic and sociocultural similarity to human ancestors in the modern world. This is not to say that they are “less evolved” than other human societies. That notion is elementary and indicative of evolutionary ignorance. They are very well suited for their habitat, both genetically and culturally. Fortunately, those of us who are less suited to our environments, both genetically and culturally (i.e., everyone else, particularly in the US), can glean incredible insights about the functioning of our own bodies and the dietary and daily circumstances to which our physiology is best suited.

I recently wrote a primer on evolutionary medicine (which can be found here), which might be beneficial to read before getting into the specifics. This post will focus on myopia, or “near-sightedness,” the visual condition where objects at a distance are out of focus. Myopia affects about 15% of Africans, a third of Americans and Europeans, and over 75% of Asians – a curious bias that I’ll address later in the article. Fortunately (sort of), myopia is easy to treat with glasses or contacts, and can even be cured to some extent with Laser-Assisted in situ Keratomileusis, commonly known as LASIK. Myopia occurs when the eye is too long, causing light to come to a focus in front of the retina and resulting in a blurry image. Corrective lenses therefore refract the light before it reaches the eye, essentially “overshooting” the refraction. Myopic corrective lenses are thicker on the sides and thinner in the middle, causing the light to spread out slightly before it hits the cornea and ultimately moving the focal point further back in the eyeball. With LASIK, an ultraviolet laser is used to vaporize tissue on the center of the cornea (note: the tissue is not burned away; the short wavelength breaks molecular bonds directly), thus reshaping the cornea so that light is correctly refracted.
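The correction described above can be put into numbers. This isn’t from the post itself, but it is the standard thin-lens way to size a myopic prescription: the lens power (in diopters) is minus one over the far point, the most distant object the unaided eye can still focus, measured in meters.

```python
def corrective_power(far_point_m):
    """Lens power in diopters for a myopic eye whose far point is
    `far_point_m` meters away (thin-lens approximation, lens worn
    close to the eye). Negative power means a diverging lens."""
    return -1.0 / far_point_m

# An eye that can only focus out to half a meter needs roughly a -2 D lens:
print(corrective_power(0.5))
```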


In order to focus, the eye depends on ciliary muscles that are attached to the lens by suspensory ligaments. When focusing on something far away, as would often be the case outdoors, the ciliary muscles relax, and the taut ligaments pull the lens into a flattened shape. When focusing on something up close, such as a book, television, computer, or phone, the muscles contract, slackening the ligaments and allowing the lens to spring into a rounder, more convex shape. Think of a camera lens: to focus on something far away, you use a longer lens or zoom in. Doing this moves the focal point of distant objects further back, allowing them to be in focus. To take up-close shots you use a macro lens, which is a very short, rounded lens that moves the focal point for near objects closer to the lens. This is how the eye works. Myopia is what happens when your zoom function is broken. Evolution and an analysis of our current sociocultural context might be able to tell us why this happens.


I’m a student, and spend a lot of my time looking at a book, a laptop, or a phone. I love to get outside when I can, but, ultimately, most of my time is spent looking at things up close. That means that the ciliary muscles in my eye – the zoom muscles – spend most of their time contracted, holding the lens in its rounded, near-focus shape. A focusing system held in one state for most of every day may adapt to that state (as far as I’m aware, no quantitative studies have been performed on ciliary muscle size or mitochondrial count, probably because this would be difficult or impossible to do on a living person. Perhaps future studies can examine the ciliary muscles of recently deceased individuals and compare individuals who suffered from myopia with individuals who had normal vision). Over time, particularly through the critical stages of development during childhood, the eye may lose the ability to fully release the lens into its flattened, distance-focus shape, thus preventing you from being able to focus on distant objects. Of course, this raises the question of whether the system can be retrained. I don’t know, and I’m not sure that I am willing to find out by using myself as a guinea pig. Unfortunately, that makes me part of the problem of “dysevolution,” as coined by Harvard paleoanthropologist and human evolutionary biologist Daniel Lieberman. Dysevolution refers to the cycle of treating diseases without trying to change or fix the cause. Our technology and scientific understanding have advanced so rapidly in the past 100 years that we can fix things such as myopia with ease. Often this cycle is perpetuated by comfort. Why change the way I do things when I can just buy contacts or glasses? My previous post mentions several other possible mismatch diseases, and Lieberman’s book, “The Story of the Human Body,” goes into detail about many of them. For many of them – if not most – we simply ignore the possible cures and instead opt for a more “comfortable” and easy treatment.
However, this cycle is sure to grow and intensify as time goes on.

Evolutionary medicine is sometimes difficult to empirically test. However, as mentioned above, modern-day hunter-gatherer societies can offer incredible insight and points of comparison for how sociocultural differences may affect our “mismatch diseases.” Studies of this kind are unfortunately few and far between (possibly because research funding also focuses on treatments). However, studies with hunter-gatherer societies have shown that very few members suffer from myopia (or from many other non-infectious ailments, such as type-2 diabetes, heart disease, osteoporosis, and even cavities). The thought is that they are exposed to a variety of visual stimuli and their visual environment is constantly changing. This “exercises” their focusing systems and keeps them responsive. Experiments have also shown that animals deprived of visual stimuli will grow elongated eyeballs. Similarly, people who spend more time indoors, particularly studying, as is common in many Asian cultures, exhibit much higher rates of myopia, whereas those who spend more time outdoors, as is more common in many African cultures, tend to have lower rates of myopia. Our eyes did not evolve to see things 2 feet from our face all day long. They evolved to keep us alive amid the plethora of visual stimuli in nature and to help us search for food: 2 things that many people, particularly children in developed countries, no longer need to do.

The solution isn’t to give up studying and electronics. It’s much simpler than that. Nearly everyone uses books and electronics, so why doesn’t everyone have myopia? One possibility is genetics, though that doesn’t seem like a plausible explanation on its own. Rates of myopia have skyrocketed only in the last century, and any latent mutation for poor vision would almost certainly have been selected against in our ancestors. The likely “cure” for myopia is balance. Spend time outside, especially as a child. The data from lab experiments as well as social statistics seem to point in this direction. If we continue to ignore the cause and only treat the symptoms, we trap ourselves in an ever-growing cycle in which we become more and more dependent upon technology.

Evolution: The Missing Link in Medicine

“Nothing in biology makes sense except in the light of evolution.”

– Theodosius Dobzhansky

Evolution is arguably one of the most widely supported and powerful theories in all of biology, and potentially science as a whole. It has been a dominant explanation for over 100 years. Once genetics entered the picture in the first part of the 20th century, Darwin’s common descent and Mendel’s inheritance were refined, greatly expanded, and solidified into the modern synthesis of evolution. Consistently verified through genetics, paleontology, geology, ecology, microbiology, and many other fields of science, evolution has become a pervasively potent field of study. It has created huge disciplinary offshoots – including evolutionary biology, evolutionary genetics, and evolutionary anthropology, to name a few – and has become the theoretical foundation for all of biology.

Some people today argue that humans are no longer under evolutionary pressures, and, thus, are no longer evolving. Though this seems to make sense superficially, it is simply not true. The first issue is that humans only live about 80 years – a mere snapshot of our species’ existence. It is difficult to observe phenotypic differences as a result of biological evolution in only a few decades. That being said, scientists have found that some very recent biological changes have occurred, including the altered expression of the FTO gene. The FTO gene codes for a protein that regulates appetite. While it does not “make” a person obese (genes tend to predispose, not determine), it has been correlated with obesity. The catch? Its association with obesity seems to appear only in people born after about 1940, according to a study published just 2 days ago. The study (which can be found here) found that, for people born after 1942, the FTO gene showed a strong correlation with increased BMI. Why, though, would a gene that has not changed suddenly become active?

The Environment

What did change in the 1940’s? Technology. WWII offered an incredible economic boost to the US that massively increased technological enterprise and was a main contributing factor in the world-superpower status that the US achieved in the 40’s. As technology increased, labor decreased. After all, the main purpose of technology is to make human life simpler. When human life becomes simpler, people become more sedentary. New technology also allowed for cheaper, higher-calorie, over-processed food. The FTO story will take a while to work out. The difference could be epigenetic alteration, novel environmental stimuli, or even another gene interacting with FTO. While more testing will be needed to show exactly what happened in the early 40’s that altered FTO expression, the fact that something did occur, likely stemming from environmental changes, still remains. Biological evolution doesn’t have to be the changing of a DNA sequence; that is far too simplistic. Anytime phenotypic or genotypic ratios change on a species-wide level, evolution is occurring. No population is in Hardy-Weinberg equilibrium, and no population ever will be. Humans will continue to evolve biologically. While cultural evolution has exceedingly outpaced biological evolution, giving the mirage that biological evolution has “stopped,” the truth is that culture can either augment or stagnate biological evolution, depending upon the situation. A cultural change to drinking more milk may augment lactase persistence (and in fact, it did), while a cultural propensity to live in climate-controlled housing year-round may slow other aspects of biological evolution. Nature doesn’t necessarily control natural selection; more broadly, the environment (cultural or natural) mediates evolution.
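Hardy-Weinberg equilibrium, mentioned above, is the baseline against which “no evolution” would be measured: given allele frequencies p and q = 1 − p, genotype frequencies stay at p², 2pq, q² indefinitely unless selection, drift, migration, or mutation perturbs them. A minimal sketch, using a made-up allele frequency rather than real data:

```python
def hardy_weinberg(p):
    """Expected genotype frequencies (AA, Aa, aa) for allele
    frequency p under Hardy-Weinberg equilibrium."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

# Illustrative allele frequency of 0.6 (hypothetical, not measured):
homozygous_dom, heterozygous, homozygous_rec = hardy_weinberg(0.6)
print(homozygous_dom, heterozygous, homozygous_rec)  # the three always sum to 1
```

Any measured deviation of a real population’s genotype ratios from these expected values is, by definition, evolution in progress.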

So, why is evolution important in medicine? Sure, doctors need to understand things like microbial evolution and how it plays a role in infectious diseases, but what about human evolution? How can a knowledge of human evolution impact medicine?

Cultural evolution has rapidly and drastically altered the human environment, thus changing how the human species evolves. More importantly, our environments have changed so aggressively that our bodies cannot keep up. (Before I go on, I have to make something clear. I am not a proponent of the Paleo Diet; if you’d like to know why, check out this post.) This means our bodies are often best adapted to the environments of the past (though these vary drastically). This has given rise to what are sometimes referred to as “mismatch diseases.” The list is extensive, but includes maladies such as atherosclerosis, heart disease, type-2 diabetes, osteoporosis, cavities, asthma, certain cancers, flat feet, depression, fatty liver syndrome, plantar fasciitis, and irritable bowel syndrome, to name a few. Some of these may not be actual mismatch diseases, but many of them likely are. Furthermore, many of these illnesses feed off one another, creating a terrible feedback loop. 100 years ago you’d likely die from an infectious disease; today, most people in developed nations will die from heart disease, type-2 diabetes, or cancer.

These diseases don’t have to be essential baggage of modernity. Anthropologists (and some intrepid human evolutionary biologists) study modern-day hunter-gatherer societies in order to glean information about the nature of our species before the Neolithic Revolution. It’s important to note that these are not perfect models (cultural and biological evolution have still occurred in these hunter-gatherer societies), but they are the best available. Interestingly, modern-day hunter-gatherers don’t suffer from many of these mismatch diseases. (This effect can’t be explained by longevity; hunter-gatherers regularly live into their late 60’s and 70’s. Though unusual to many of us, their lives aren’t as brutish as they are often portrayed.) Diseases such as type-2 diabetes, hypertension, heart disease, osteoporosis, breast cancer, and liver disease are rare among these societies. What’s more, myopia (near-sightedness), asthma, cavities, lower back pain, crowded teeth, collapsed arches, plantar fasciitis, and many other modern ailments are exceedingly rare. So what’s different? The easy answer is their diet, lifestyle, and environment. The difficult answer involves elucidating the physiological importance of certain social norms and the biochemical processes of differing diets. Some very exciting work is beginning to arise in this field, dubbed “evolutionary medicine.”

Modern medicine and medical research focuses largely on treating problems, i.e., drugs and procedures that alleviate symptoms after the disease has manifested. While the cause is noble, and indeed necessary, it’s not enough. The childish logic of medical research creates a cycle of sickness-treatment that, in 2012, totaled almost $3 trillion in healthcare costs. Furthermore, the sedentary and Epicurean lifestyle in which many Americans live willingly feeds this cycle; among the less privileged, necessity feeds this cycle through the inability to afford healthy food, limited access to health education, and a sociocultural feedback loop that breeds its own vicious cycle.

There will likely never be a drug that can cure cancer (of which there are thousands of variants that can even differ between individuals with the same diagnosis), heart disease, type-2 diabetes, or many of the other previously mentioned noninfectious diseases. The rationale is akin to putting water in your car’s gas tank and hoping additives will make it work as efficiently as gasoline. The car was built to run on gasoline. Similarly, your body did not evolve to eat excessive amounts of salt, carbs, and sugars (of which the different types, particularly glucose and fructose, do not have the same biochemical effects during digestion), sit for extended periods of time, wear shoes (particularly those with arch support; a common misconception is that arch support is good for you when, in fact, it weakens the muscles of the arch, leading to ailments such as collapsed arches and plantar fasciitis), read for several hours at a time, chew overly processed food, or do many of the other things that people in developed nations commonly do and often see as luxuries.

Modern medicine needs a paradigm shift. Funding needs to support not only treatments, but also investigations into prevention. The medical cause of diabetes may be insulin resistance, but what causes insulin resistance and how can we prevent it? Sugar may cause cavities, but what can we do to prevent this? Shoes, even comfy ones, may cause collapsed arches, but how do we prevent this? The immediate response may be that this sort of prevention cannot be attained without abandoning modern technology altogether. However, this isn’t the case, and it’s not the argument I’m trying to make. Research should focus on a broad range of interacting variables, including diet, work environment, school environment, and other aspects of evolutionarily novel environments. Only after research from this evolutionary perspective takes place can constructive conversations and beneficial environmental changes occur. We don’t have to abandon modern society to be healthy; we just need to better understand how our lifestyle affects our bodies. Items such as cigarettes and alcohol are already age-limited and touted as dangerous to health. Is junk food, particularly soda, any different? We don’t put age regulations on cigarettes or alcohol to protect bystanders. Instead, these regulations protect children who cannot be relied upon to make proper choices in their naivety. Should soda be under the same constraints?

If medicine and medical research do not undergo this paradigmatic shift and incorporate an evolutionary perspective, the outcome does not bode well for us. Medical costs will continue to rise, with little room for improvement and greater opportunity for socioeconomic factors to determine the quality of healthcare available. This ad hoc treatment approach to medicine is not sustainable, and it is not the best we can do.

Multiplex Automated Genome Engineering: Changing the world with MAGE

Humans have evolved a unique mastery of toolmaking through advanced technology. As an extension of our biological bodies, technology has loosened the grip of natural selection. This is particularly true in the fields of biomedicine and genetic engineering. We have the ability to directly alter the blueprint of life for any purpose we wish. Beginning in the 1970s with the creation of recombinant DNA and transgenic organisms, genetic engineering has offered scientists the ability to study genes on a level that may not have seemed possible at the time. The field has provided a wealth of knowledge as well as practical applications, such as knockout mice and the ability to produce near-endless amounts of human insulin for diabetics.

As of 2009, multiplex automated genome engineering (MAGE) has ushered in a new branch of genetic engineering – genomic engineering. We are no longer restricted to altering single genes, but rather are able to alter entire genomes by manipulating several genes in parallel. This new ability, brought about by MAGE technology, allows for nearly endless applications that stretch well beyond medicine or industry; agriculture, evolutionary biology, and conservation biology will benefit tremendously as MAGE technology progresses. Genetic engineering advancements such as MAGE are poised to revolutionize entire fields of science, including synthetic biology, molecular biology, and genetics by offering faster, cheaper, and more powerful methods of genome engineering.

Homologous Recombination

Genetic engineering underwent a revolutionary change in the 1980s, largely due to the pioneering work of Martin Evans, Mario Capecchi, and Oliver Smithies. Evans and Kaufman were the first to describe a method for extracting, isolating, and culturing mouse embryonic stem cells. This laid the foundation for gene targeting, a method discovered independently by Oliver Smithies and Mario Capecchi. Capecchi and his colleagues were the first to suggest that mammalian cells had the machinery capable of homologous recombination with exogenous DNA. Smithies took this a step further, demonstrating targeted gene insertion using the β-globin gene. Ultimately, the combined work of Evans, Smithies, and Capecchi on homologous recombination earned them the Nobel Prize in Physiology or Medicine in 2007. The science of homologous recombination has enabled many scientific discoveries, primarily through the creation of knockout mice.

Homologous recombination works under many of the same principles as chromosomal recombination in meiosis, wherein homologous genetic sequences are randomly exchanged. The difference lies in the fact that homologous recombination in gene targeting works with exogenous DNA and at the level of a gene rather than a chromosome.


The method works by using a double-stranded genetic construct whose flanking regions are homologous to the flanking regions of the gene of interest. This allows the sequence in the middle, containing a positive selection marker and the new gene, to be incorporated. The positive selection marker should be something that can be selected for, such as resistance to a toxin or a color change. Outside one of the flanking regions of the construct lies a negative selection marker; the thymidine kinase gene is commonly used. If recombination is too lenient and the thymidine kinase gene is incorporated into the endogenous DNA, those cells can be detected and eliminated. This prevents too much genetic information from being exchanged.
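The two-marker logic amounts to a pair of filters. Here is a minimal toy model in Python; the probabilities are invented for illustration, and the two selection steps stand in for real drug selections rather than any specific published protocol:

```python
import random

# Toy model of positive/negative selection after gene targeting.
# The rates below are illustrative assumptions, not measured values.

def simulate_selection(n_cells=10_000, p_homologous=0.001, p_random=0.01,
                       seed=1):
    """Count cells surviving positive selection, then negative selection."""
    rng = random.Random(seed)
    correct = random_insert = 0
    for _ in range(n_cells):
        r = rng.random()
        if r < p_homologous:
            correct += 1        # flanks recombined; TK gene left outside
        elif r < p_homologous + p_random:
            random_insert += 1  # whole construct, TK included, integrated
        # otherwise: untransformed; dies under positive selection
    after_positive = correct + random_insert  # only marker-bearing cells live
    after_negative = correct                  # TK+ random integrants removed
    return after_positive, after_negative
```

The point of the sketch is the order of the filters: positive selection removes untransformed cells, and negative selection then removes cells that integrated the construct non-homologously.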

Using this method, knockout mice can be created. A knockout mouse is a mouse that is lacking a functional gene, allowing for elucidation of the gene’s function. Embryonic stem cells are extracted from a mouse blastocyst and introduced to the gene construct via electroporation. The successfully genetically modified stem cells are selected using the positive and negative markers. These are isolated and cultured before being inserted back into mouse blastocysts. The mouse blastocysts can then be inserted into female mice, producing chimeric offspring. These offspring may be mated to wild-type mice. If the germ cells of the chimeric mouse were generated from the modified stem cells, then the offspring will be heterozygous for the modified gene and wild-type gene. These heterozygous mice can then be interbred, with a portion of the offspring being homozygous for the modified gene. This is the beginning of a mouse line with the chosen gene “knocked-out.”
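The final crosses follow ordinary Mendelian ratios: mating two heterozygotes yields roughly one homozygous knockout in four offspring. A quick sketch, with illustrative allele symbols:

```python
import random

# Mendelian sketch: cross two mice heterozygous for the modified allele "-".
# Each parent contributes one allele at random, so ~1/4 of offspring are -/-.

def knockout_fraction(n_offspring=10_000, seed=42):
    rng = random.Random(seed)
    knockouts = sum(
        rng.choice("+-") == "-" and rng.choice("+-") == "-"
        for _ in range(n_offspring)
    )
    return knockouts / n_offspring
```

Running this gives a fraction close to 0.25, which is why several litters are usually needed before a homozygous founder for the knockout line appears.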


Multiplex Automated Genome Engineering Process

The major drawback of the previously described method of “gene targeting” is the inability to multiplex. The process is not very efficient, and targeting more than one gene becomes problematic, limiting homologous recombination to single genes. In 2009, George Church and colleagues solved this issue with the creation of multiplex automated genome engineering (MAGE). MAGE technology uses hybridizing oligonucleotides to alter multiple genes in parallel. The machine may be thought of as an “evolution machine,” wherein favorable sequences are chosen at a higher frequency than less favorable sequences. The hybridization free energy is a predictor of allelic replacement efficiency. As cycles complete, sequences become more similar to the oligonucleotide sequence, increasing the chance that those sequences will be further altered by hybridization. Eventually, the majority of endogenous sequences will be completely replaced with the sequence of the oligonucleotide. This process only takes about 6-8 cycles.
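The claim that 6-8 cycles suffice follows from geometric convergence: if each cycle replaces a target allele with probability p, the unreplaced fraction shrinks as (1 - p) raised to the number of cycles. A back-of-the-envelope sketch, with p = 0.3 as an assumed, illustrative per-cycle efficiency rather than a sourced figure:

```python
# Geometric convergence of allelic replacement across MAGE cycles.
# p is an assumed per-cycle replacement efficiency for one locus.

def fraction_converted(p=0.3, cycles=8):
    """Fraction of the population carrying the new allele after n cycles."""
    return 1 - (1 - p) ** cycles

for n in (2, 4, 6, 8):
    print(f"after {n} cycles: {fraction_converted(cycles=n):.1%}")
```

With these assumed numbers, roughly half the population carries the new allele after two cycles and over 90% after eight, which matches the order of magnitude the text describes.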


After the E. coli cells are grown to mid-log phase, expression of the Beta protein is induced. The cells are chilled and the media is drained. A solution containing the oligonucleotides is added, followed by electroporation. This step is particularly harsh, killing many of the cells. The surviving cells are chosen based on positive markers (optional, but this increases efficiency) and allowed to reach mid-log phase again before the process is repeated. Church and his colleagues optimized the E. coli strain EcNR2 to work with MAGE. EcNR2 carries a plasmid with the λ phage genes exo, beta, and gam, and is deficient in mismatch repair. When expressed, the phage genes help keep the oligonucleotide annealed to the lagging strand of the DNA during replication, while the mismatch repair deficiency prevents the cellular repair machinery from correcting the oligonucleotide sequence once it is annealed. Using an improved technique called co-selection MAGE (CoS-MAGE), Church and colleagues created EcHW47, the successor to EcNR2. In CoS-MAGE, cells that exhibit naturally superior oligo uptake are selected before the genes of interest are targeted.

MAGE technology is currently in the process of being refined, but shows incredible promise in practical applications. Some of the immediate applications include the ability to more easily and directly study molecular evolution and the creation of more efficient bacterial production of industrial chemicals and biologically relevant hormones. Once the technique has been optimized in plants and mammals, immediate applications could be realized in GMO production and creation of multi-knockout mice that will give scientists the ability to study gene-gene interactions on a level previously unattainable. A more optimistic and perhaps grandiose vision could see MAGE working towards ending genetic disorders (CRISPR technology, an equally incredible genomic editing technique, may beat MAGE there) and serving as a cornerstone technique in de-extinction. The ability to alter a genome in any fashion brings with it immense power. The possibilities for MAGE are boundless, unimaginable, and are sure to change genomic science.

For more information on Homologous recombination, see:

http://www.bio.davidson.edu/genomics/method/homolrecomb.html

For more information on MAGE, see:

Wang, H. H., Isaacs, F. J., Carr, P. A., Sun, Z. Z., Xu, G., Forest, C. R., & Church, G. M. (2009). Programming cells by multiplex genome engineering and accelerated evolution. Nature, 460(7257), 894-898.

Wang, H. H., Kim, H., Cong, L., Jeong, J., Bang, D., & Church, G. M. (2012). Genome-scale promoter engineering by coselection MAGE. Nature methods, 9(6), 591-593.

For more information on CRISPR (which I highly recommend; it’s fascinating), see:

https://www.addgene.org/CRISPR/guide/

A Case for the Coalescence of Science and the Humanities

To contemplate the nature of humanity, there must exist endeavors from both the sciences and humanities. Each branch of knowledge brings to the table its own unique perspectives, assumptions, and models of learning. The sciences teach us about the natural world and its functioning. From the microscopic investigations of DNA to the search for exoplanets, science has defined the latter part of the Anthropocene – the epoch in which global ecosystems have been subjugated and forever changed by human activities. Science ushered technology into a dimension that was previously unimaginable, where there seems to be no end to the artificial extensions of our biological domains.

With its jumpstart from the 17th century Enlightenment, scientific inquiry and discovery have revolutionized our world. The Age of Enlightenment saw a rise of reason, skepticism, and individual thought for which there was no precedent. A Cambrian Explosion of scientific knowledge, the Enlightenment brought about discoveries that rewrote the trajectory of human existence. Philosophes, freed from the dogmatic ideology of the past, drew up the blueprints of our future. However, as successful and revolutionary as the Enlightenment was, it proved unable to reach the core of the human spirit, unable to tap into the emotional side of human nature. In an attempt to fulfill that unsatiated desire to understand the core of humanity, the Romantic era was ushered in. The 19th century champions of the creative arts filled the emotional gap left by scientific endeavors. Expressions of individuality, residues of the Enlightenment, flooded the arts. The importance of aesthetic value was stressed, and the human imagination was extended in all directions. Romantics attempted to divulge the secrets of the human experience, the continuum on which humans ride in the cosmos. A more focused and anthropocentric approach, Romanticism succeeded where the Enlightenment had failed, but failed where the Enlightenment had succeeded.

As science and the humanities grew increasingly complex, the two seemed to form a fixed, irreconcilable dichotomy. Answering two fundamentally different types of questions, the humanities and the sciences are both essential to a holistic understanding of our existence in the larger cosmos. As our technoscientific advances accelerate, our defining of the Anthropocene becomes ever more acute. Advances in science and technology drive our imposition on nature; our ability to repurpose the existing and to create anew is changing the landscape of Earth. To counteract the effects of science and technology on nature, we turn to… science and technology. Science shows us how to do things; however, it cannot show us what we should do. That requires a taste of the humanities. The humanities represent the venture into and extension of our human continuum. They attempt to unveil and explain the idiosyncrasies of human thought, creativity, and overall existence. Much like the scientific endeavor, the humanities' endeavor is a never-ending quest. There is always something new to discover that has the potential to shift our way of thinking or understanding.

As we penetrate deeper into the depths of nature, we must apply the knowledge and revelations of the humanities to our excursion. As we continue our dominion over the Earth and extend our understanding of nature, we must give ourselves a course to follow. Because the humanities explore and explain our specialized place in the cosmos, they are in the best position to evaluate our intrusions upon nature. Questions of value cannot be answered by science. As prescient and imaginative as science is, it still follows the shadow of the humanities. Science fiction drives the frontiers of science. Our explorations into human nature and creativity are the precursors of scientific exploration. A coalescence of these two primary branches of learning is essential to our continued existence. Each serves as its own weight in the balance. To see the larger picture of our existence in the cosmos, we must turn to science. To understand our own existence and the intricacies by which it is fueled, the humanities are irreplaceable. To advance wisely, we must amalgamate the two into a functional framework.

Eschewing Scientific Curiosity in the US – A Slippery Slope.

NPR recently ran a story titled "When Scientists Give Up." The story revolved around scientists who, in the middle of their careers, decided to switch professions entirely due to issues with funding. Now, I am a bit biased when it comes to the importance of science, and I'll be the first to admit that. However, I think it's clear what role science has played, and must continue to play, in our society (unashamedly using this as a plug for a previous blog post I wrote concerning science in society, which can be found here). And don't get me wrong: there's nothing wrong with a change of career, whether it's due to poor job prospects or simply a change of interest. That being said, what on Earth is an individual who spent a minimum of 8 rigorous years at a well-respected school – gaining knowledge for a very particular career – doing switching careers at 40? On that same note, why is someone in whom the US has invested millions of dollars in grant money changing career paths? Clearly, something is wrong with this picture.

Science doesn't prove facts – it explains them. Science doesn't prove evolution, it explains evolution; it doesn't prove gravity, it explains gravity; it doesn't prove that cells form the basis of life, it explains how cells form the basis of life. All of these things are already taken as facts (so, yes, evolution is a "fact" in the same sense that gravity or cell theory are "facts" – an observation that science attempts to explain in a systematic, reviewed manner). Now, if science never proves things, how long does it need to work on an explanation before it can be taken for granted? There's no real answer to this question, though it does require a decent amount of time. The answer is more a function of how well an explanation stands the test than of how long it has stood. Gravitational theory has been standing for nearly 400 years; evolutionary theory, cell theory, and germ theory (that is, the theory that microorganisms can cause disease) for about 150 years. Does that mean these theories have gone unamended? Of course not! That's what science does: it pokes and prods at our ideas, refining them until they are able to stand the test – any test. As iron sharpens iron, so one scientist – or a large community of them – sharpens another. Each of the previously mentioned theories was rather radical in its time, outside the common consensus and understanding of the day. Galileo's ideas were so radical that he spent the last decade of his life under house arrest – all because he was trying to explain what he was seeing, and that explanation was regarded as too radical.

Money is a precious thing, not to be thrown around lightly (unless, of course, it's being blown on military-related projects, but that's another story). Grant writing is tough, and the competition is incredibly fierce. As such, the organizations that shell out grant money – hereafter referred to as "grantfathers," a term I want credit for coining – are careful about to whom they give. Unfortunately, it seems that more and more grantfathers are becoming conservative with their wagers. They're spending their dollars playing the penny machines rather than the quarter slots. While I understand the safe approach, it's destroying the very essence and character of science. Yes, ideas must be rigorously tested and stand the test of time – even the boring ones. However, focusing only on this area of science, and ignoring the frontier-busting, trailblazing, imagination- and passion-driven areas, does an injustice to the scientists, the field of study, and the country.

We didn’t get to where we are by playing it safe. Science – and by extension, technology – demands innovation. Innovation breeds errors. Errors, in the scientific community, breed precision. The current generation is afraid of failure. We are willing to stand up for a cause, maybe more so than most generations hitherto, but we are afraid to actually act upon the cause. This culturing of “skittishness” is driving science and technology into a plateau, shrinking the branches that emerge from the trunk of discovery. The innovation is there – the action is missing. But, who can blame them? You can only study what someone is willing to give you money for. In the modern field of science, you have to look out for numero uno, even if it flies in the face of everything you got into science for in the first place.

If government funding stays on its conservative route, the future of the US as a leader in science and technology will grow dimmer and dimmer, overshadowed by more daring countries. Now, as a scientist, where will you go? Do you stay in the US, where funding is tight and sight is narrow, or head across the pond, where funding, while still competitive, is more open-minded and nurturing of scientific curiosity? As the world's greatest minds begin to reconvene outside the US, our position as a global leader will diminish into something more second-rate. Once the scientists are gone, getting them back will prove difficult. Students will seek degrees overseas, where the funding and mentors are to be found. The US has held this position for a long time, but it is slowly slipping as king of the mountain. Once the avalanche starts, it will be difficult to reverse. If the way we currently handle problems that demand pre-emptive action – such as fossil-fuel dependency or drug-resistant bacteria – says anything about how we will handle this one, it may already be too late.

The Role of Science in Society

Introduction

Carl Sagan once stated, “… the consequences of scientific illiteracy are far more dangerous in our time than in any time that has come before.” This statement becomes truer every day, as scientific and technological innovations are occurring at an ever-increasing rate. Studies suggest that less than 30% of Americans are “scientifically literate,” meaning that over 70% of Americans would have trouble reading – and understanding – the science section of the New York Times. So, why is this important? After all, everyone has their strengths and weaknesses.

The problem with this view is that science is a driving force behind our sociocultural evolution. New ideas and new inventions are constantly redefining how we live our lives. As time goes on, science and technology will define most of life as we live it. Already, this is true. 100 years ago, people often lived day to day without electricity. Today, the most frightening thing most people could imagine would be a total loss of electricity. Imagine all of the things that simply wouldn't work without it: phones, televisions, the Internet, lighting, heat and A/C, automobiles, and many parts of the manufacturing process for everyday items. We have built a society in the United States that is almost entirely dependent upon electricity. Personally, it's difficult for me to imagine a world without electricity because everything I know is based upon it. Life has become relentlessly complex and multifaceted. Most people have no idea how the world around them – that is, this semi-artificial world, or anthropogenic matrix – functions.

As time goes on, our day-to-day lives will become less and less “natural” and more and more artificial. This is not inherently bad. However, it does raise the standards for what we must understand about how the world, especially our anthropogenic matrix, works. Failing to keep a basic understanding of science and technology is destined to segregate the population, facilitating the rise of an “elite” few, resembling more of an oligarchy than a representative democracy. I’m not much of a conspiracy theorist, and I don’t mean to imply that a “New World Order” is going to secretly control our lives. I do, however, think that if nothing is done about our general ignorance of science, we will slip away from the democracy that we claim to love so dearly. How? How can ignorance of science and technology lead to the failure of democracy? After all, you can vote regardless of your scientific literacy. While it’s true that you can vote while being largely ignorant of how the world works, this is part of the problem. To be clear, I do not think that there should be any kind of scientific literacy test in order to vote. This would only serve as fuel for the ever-broadening gap between those who understand science and those who don’t. In a democracy, everyone should be able to vote. However, given the state of knowledge that we currently have and the increasingly complex world in which we find ourselves, uneducated voting has disastrous consequences.

A Little Politics

Politics is, in its most basic form, the practice of influencing a population. This is done by verbally persuading people to get behind an action that will be set in motion in order to guide the population down a particular path. The United States is a representative democracy, which means officials are elected by the public to govern the public. The United States is not a simple representative democracy; many modifications are in place to give the minority a voice. However, despite these modifications, "majority rules" is still the rule of thumb. On its surface, a "majority rules" system seems ideal. Going with what most people want or believe seems like a solid idea. I agree that this is typically a good philosophy – that is, as long as those voting are educated on the matter at hand.

The Modern Intersection of Science and Government

The base of everything in our lives is built from science; it holds our infrastructure together. When a politician makes a motion to change or regulate something, he or she is making a change that affects our anthropogenic matrix and, consequently, the natural world in which our matrix operates, through such acts as deforestation, ozone depletion, and species extinction. If a constituent does not have a basic understanding of how the world works, then how can that individual make a good decision about electing a public official who will pass laws that affect the world? Moreover, ignorance of science and technology (not to mention the poor reasoning and logical evaluation skills that tend to accompany a lack of science education) leads to votes based largely on emotion and superficial similarity. If you know very little about a subject, you cannot make an educated decision regarding that subject. If not an educated understanding, something else must be the basis on which you make decisions. The next best choice would be decisions based on reason and logic. Unfortunately, the fostering of critical thinking is also absent in many educational settings. Science acts as a major source of training by which people learn to reason and form logical conclusions. In turn, many – though not all – who base their decisions on logical reasoning are in the same group of people who base their decisions on knowledge of science.

If you don’t use a knowledge of science to aid in political decision-making, it’s likely that you are more swayed by charisma and emotional triggers. Those candidates who are more like you, or at least are ostensibly like you, are more likely to sway your opinion. After all, that’s what politics is all about – persuading people. If most of your constituents are not scientifically literate, then you as a politician will be less likely to use science as a persuasion tactic and more likely to use charisma and emotionally charged wording that resonates with many of your constituents. Though not a valiant method of persuasion, it is a smart one. Unfortunately, this only perpetuates the current epidemic of scientific illiteracy.

Why Public Knowledge of Science Matters

One major problem with scientific illiteracy is that politicians can make a poor decision, intentionally or unintentionally, with no one to call them out. Regulations, or the lack thereof, concerning issues such as climate change, medical research, and irresponsible use of resources must be based on the science used to study and understand these matters. If a politician uses a non-scientific basis for creating laws (a basis fueled by a scientifically illiterate constituency and, perhaps, an ulterior motive such as a monetary stake in the decision), then consequences are sure to ensue. The effects can be immediate, such as a lack of funding for education or medical research, or delayed, as with the consequences surrounding anthropogenic climate change.

Politics aside, understanding science and technology is imperative to functioning in our ever increasingly technological world. 100,000 years ago, one had to be a skillful hunter; 10,000 years ago, one needed to be adept in agriculture; today, we must stay informed on, at the very least, the basics of science. Expertise is not required for social and political progress, but awareness is essential.

Ebola – Not The Threat To the US That You Think It Is

About 900 people have died of Ebola in the last 6 months. Should you be worried? If you don’t want to read the post, and are just looking for an answer: NO!

If you’re one of the people who is saying, “But they are bringing 2 Ebola patients to the U.S., and a man in New York is suspected of having the disease!!” then please, for the sake of everyone around you, keep reading.

Let's take a look at Ebola first. What exactly is it? Ebola Virus Disease (in humans) is caused by one of three species of Filoviridae viruses: Zaire, Sudan, and Bundibugyo. There are two other Ebola species, but they do not affect humans. Of the three mentioned, Ebola Zaire is the nastiest, with anywhere from a 50-90% fatality rate – closer to 50% with supportive care and closer to 90% without. Ebola is a hemorrhagic fever disease, characterized by high fever, shock, multiple organ failure, and subcutaneous bleeding. Typically, the patient first shows flu-like symptoms before progressing to the more characteristic bleeding. If the virus itself doesn't kill you, your own immune system will often spiral out of control and send you into shock and, most often, death.

Now that the bad part is over, let’s look at why it’s not as scary globally as you might think.

Strengths:

- High mortality rate
- Ambiguous early symptoms
- 3-21 day incubation period
- No treatment or cure

Weaknesses:

- Only spread through body fluids
- Lacks a ubiquitous vector
- Physically impairs the victim

Although the number of strengths outweighs the number of weaknesses, the quality of the weaknesses far outweighs that of the strengths. Unless a disease is airborne or transmitted by some ubiquitous vector, it is unlikely ever to cause a pandemic (that is, a global outbreak). In addition, Ebola impairs its victims. Even the flu-like symptoms are enough to keep you from much human contact. The scariest part about Ebola is the incubation period. Someone may not show symptoms for up to 3 weeks after being exposed to the virus. While this, in concert with the ambiguous early symptoms, might keep the flame flickering, it isn't enough to start a wildfire. Still not convinced? Let's put the outbreak into perspective:

We are currently experiencing the largest and most deadly Ebola outbreak in recorded history. The death toll has almost reached 900 in 6 months – less than the number of people who die every 6 months from hippopotamus attacks. The Spanish Flu of 1918, undoubtedly the worst pandemic in the history of mankind, infected about 30% of the people in the world and killed anywhere from 3-5% of the global population in a single year. If you see a grave marker with a death date anytime in 1918, chances are greater than not that the individual died from Spanish Flu. This astounding death toll was accomplished WITHOUT the advent of modern travel, i.e., no airplanes. The current Ebola outbreak has killed 900 people, or about 0.00001% of the world population – roughly one in eight million. 900 out of about 7.3 billion people worldwide. Oh yeah, the other thing? Ebola isn't worldwide. It's in West Africa.
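For the skeptical, the arithmetic behind that percentage, using the post's own figures:

```python
# Death toll as a share of world population (figures from the post).
deaths = 900
world_population = 7.3e9

percent = deaths / world_population * 100
print(f"{percent:.5f}%")  # about 0.00001%
```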


The only time Ebola has ever really been outside of Africa is… well… never. The closest we've come is recently bringing two patients to the US for treatment – two patients who will likely not even be exposed to US air or land for the next two weeks, as they were flown in on a plane with a quarantine chamber and are now isolated in a ward of one of the top hospitals in the US.

I'm not trying to downplay the seriousness of Ebola from the safety of my suburban coffee shop. Yes, it would be scary if I were living in Sierra Leone. Not so much because I would have a high chance of contracting Ebola, but because I wouldn't know where it might be lurking. And, if I did contract it, I'd be more miserable and frightened in the following two weeks than I'd ever been in my life. I would only be relieved of this misery by multiple organ failure and bleeding out of my eyes until I died, or by the less likely chance that I survive. Ebola is a terrible, nasty disease, but it's not a global threat, nor is it a U.S. threat.

Anthropology, Biology, and Other Musings