Tag Archives: Genetics

A Tale of Cookies and Milk: How We Adapted to Consuming Grains and Dairy

Humans are curious creatures. We like to poke and prod at new things to see what will happen. This curiosity is part of the reason we are successful. Though it can sometimes lead to disastrous outcomes, curiosity can drive not only cultural inventions but biological changes. This is especially true for our diet, which has changed radically in the past 10-20 thousand years. Two of the biggest changes have been our ability to efficiently digest grains and dairy, both products of the agricultural revolution, when humans were experimenting with many new types of food. I'm sure the first individual to start eating grain was met with a warmer reception than the one who suggested we start drinking cow and goat milk. At any rate, both ventures wound up changing our biology and culture. Just think: without amylase and lactase, Santa would be having something other than cookies and milk.

The Short Story of Amylase

In order to digest grains or any other starchy food, an organism needs an enzyme called amylase. Amylase hydrolyzes starch, eventually freeing the glucose molecules contained within the food. Though amylase is not unique to humans, human amylase has some unique features. In humans, there is a positive correlation between the number of copies of AMY1, the gene responsible for production of salivary amylase, in a genome and the amount of amylase expressed in the saliva. Interestingly, the average human carries several times as many copies of AMY1 as a chimpanzee, suggesting selection on amylase after our split from our common ancestor with chimpanzees. The small sequence differences among human AMY1 copies suggest a fairly recent selection event. Moreover, populations with high-starch diets have more AMY1 copies than populations with low-starch diets, further supporting recent and fairly rapid evolution. When it comes to diet, it seems natural selection can act quickly.
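To make the copy-number/expression relationship concrete, here is a small sketch. The numbers below are invented purely for illustration (they are not measurements from any study); the point is only what a positive correlation between AMY1 copies and salivary amylase would look like:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical individuals: AMY1 copy number vs. salivary amylase (arbitrary units)
copies  = [2, 4, 5, 6, 7, 9, 10, 14]
amylase = [20, 45, 50, 70, 65, 90, 95, 130]
print(f"r = {pearson_r(copies, amylase):.2f}")  # strongly positive
```

A real analysis of the high- and low-starch populations mentioned above would use measured copy numbers and enzyme levels, but the statistic is the same.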

The process of carbohydrate digestion begins in the mouth with an enzyme called ptyalin, also known as salivary α-amylase. Ptyalin hydrolyzes the glycosidic bonds within starch molecules, breaking them down into the disaccharide maltose. In the walls of the stomach, specialized cells called parietal cells secrete hydrogen and chloride ions, creating hydrochloric acid. Amylase, which works at an optimum pH of about 7, cannot function in the highly acidic environment of the stomach, so salivary digestion of starch halts there.

The second part of starch digestion is initiated in the small intestine by an enzyme called pancreatic amylase. Though pancreatic amylase and salivary amylase are encoded by different genes, those genes sit side by side in the genome. It has been suggested that an endogenous retrovirus inserted DNA in between the two copies of amylase that existed in our ancestors' genome; this interruption in the open reading frame of the gene caused a mutation that promoted amylase production in the saliva from one of the gene copies that originally coded for pancreatic amylase. This mutation would have had a clear advantage, allowing for greater breakdown of starchy foods. Further evidence for the positive selection of salivary amylase production can be seen in its independent, convergent evolution in mice and humans.

So the story for amylase is fairly short. Our ancestors began with two pancreatic amylase genes, which split to create one pancreatic and one salivary amylase gene. Over time, copy-number variations in genes occurred and were either selected for or against. Random gene duplication in conjunction with varying diets among human populations has resulted in the amylase locus being one of the most variable copy-number loci in the entire human genome.

The Somewhat Longer Story of Lactase

The Neolithic (agricultural) revolution brought about some of the biggest cultural changes our species has ever seen. Small groups of hunter-gatherers began to morph into large societies of settled farmers, existing in tandem with groups who kept a nomadic herding lifestyle. Nomadic herders could travel between the newly formed cities, trading meat, milk, or animals for agricultural products such as recently domesticated plants and grains. This substantial change in lifestyle caused a rapid overhaul in many aspects of human biology, including immunity, body size, and the prevalence of certain digestive enzymes.

Lactase is the enzyme that breaks down lactose, the disaccharide sugar found in dairy products, into the monosaccharides glucose and galactose. Lactase is an essential enzyme because it allows infants to break down the lactose in their mother's milk. However, for a significant portion of the world's population, the lactase gene is down-regulated during childhood. Curiously, the portion of the world's population that does not experience this down-regulation is mostly of European descent. There is an interesting correlation between geographic location and the percentage of the population with lactase persistence: the further north you go in Europe, the more lactase persistence you find. This probably reflects the fact that the colder climate of Europe, especially northern Europe, left fewer options for food. The ability to digest lactose and reap its benefits into adulthood could have been a major factor in surviving to reproductive age, increasing the prevalence of lactase persistence in those populations.

Milk has a decent amount of calories and fat to keep energy reserves up, helping people survive the harsh winters of Northern Europe. In addition, it provides nutrients such as calcium, protein, and vitamins B12 and D. Today in the Western world we see the high caloric and fat content of milk as a threat of weight gain; people living in 7000 B.C., however, would have seen it as a gold mine for survival. As essential as the calories and fats were to Northern European Neolithic people, the vitamin D content of milk may have been equally important. In order for the body to synthesize vitamin D, it needs UVB rays from sunlight. This is an issue at northern latitudes, where it's colder and there's less sunlight than in many other areas on Earth. Moreover, the amount of UVB light that can be absorbed depends upon the angle at which the Sun's rays strike the Earth. So even during a clear, sunny day in the winter, people living at northern latitudes may not be absorbing UVB rays.


One way to combat low levels of UVB rays is to have fair skin. UVB rays that strike the skin cause the synthesis of cholecalciferol (vitamin D3) from 7-dehydrocholesterol already present in the skin, eventually leading to the production of a usable form of vitamin D. Specifically, 7-dehydrocholesterol is found predominantly in the two innermost layers of the epidermis. This can be an issue for UVB absorbance, since melanin, the pigment responsible for darker skin, absorbs UVB at the same wavelengths as 7-dehydrocholesterol. Indeed, it turns out that fair-skinned people (who tend to live in colder, more northern climates) are more efficient at producing vitamin D than darker-skinned people.

Vitamin D is a really underappreciated nutrient. It is essential for the absorption of calcium, which is nearly ubiquitous in its usage throughout the body, from brain function to muscle contraction. Recent research has illuminated other roles for vitamin D, including regulation of genes associated with autoimmune diseases, cancers, and infection. One study in Germany found that participants (average age 62) with the lowest vitamin D levels were twice as likely to die, particularly of cardiovascular problems, over the following 8 years as those with the highest vitamin D levels.

Though it isn't as critical to us today, lactase persistence might have saved the Neolithic populations of Northern Europe. Milk's dose of fat and calories helped bump up energy stores, while the calcium and vitamin D in whole milk offered significant nutritional benefits. Though there are still many questions surrounding the evolution of lactase persistence in sub-populations of humans, the selective advantage of this phenotype is quite clear: those with lactase persistence would have had supplemental nutrition that helped them survive the Northern European winters.


When DNA Isn’t Enough: Methylation, Forensics, and Twins

DNA evidence is often considered a "home run" in forensics. If you find readable DNA at the crime scene and it matches a suspect, a correct conviction is almost assured. A DNA sample can often point to a single individual with ridiculous specificity – often 1 in a quadrillion or greater. But what happens when someone else shares your DNA?

Monozygotic, or "identical," twins differ from dizygotic, or "fraternal," twins in that they come from the same zygote (hence "mono"zygotic). In other words, identical twins come from one fertilized egg, while fraternal twins come from two. This means that identical twins share the same DNA, while fraternal twins share as much DNA as any other sibling pair. There are, of course, many variations of monozygosity depending on when during development the split actually takes place. This nuance has led scientists in Germany to a possible solution to the problem of identical-twin DNA.

During development, only a few cells are present. These cells begin to differentiate into the different tissue types that they will become. As these cells divide rapidly to produce all of the daughter cells, mutations can occur in the DNA. If a mutation occurs earlier, it will be present in a larger fraction of the daughter cells and will be more easily detectable during the twin's lifetime. This differentiation of tissues also means that the earlier the twins split, the fewer mutations they will have in common (and, thus, the more differences you can detect in their DNA). It has recently been suggested that a handful of single nucleotide variants, or "SNPs," can be found between twins. However, these SNPs aren't so easy to find in a sea of 3 billion other nucleotides. To find these few differences, and find them reliably, the entire genome of both twins must be sequenced several times over. In the case of the German scientists, their experiment resulted in 94-fold coverage, meaning they covered each of the 3 billion nucleotides 94 times. This must be done to ensure accuracy: at 3 billion nucleotides, even 99.9% accuracy still results in 3 million errors. If anything, this shows how incredibly accurate our cellular machinery is.
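The coverage arithmetic can be checked directly (0.1% of 3 billion bases works out to 3 million expected errors per pass). Here is a short Python sketch; the 94x coverage and 99.9% per-read accuracy come from the text, while the simple majority-vote consensus is my own simplifying assumption about how deep coverage buys accuracy back:

```python
from math import comb

GENOME = 3_000_000_000   # ~3 billion nucleotides in a human genome
PER_READ_ERROR = 0.001   # 99.9% per-base accuracy for a single read
COVERAGE = 94            # each position sequenced 94 times

# One pass alone would leave millions of miscalled bases:
print(f"{GENOME * PER_READ_ERROR:,.0f} expected errors in a single pass")  # 3,000,000

# With 94x coverage and a naive majority vote at each position, a miscall
# survives only if more than half the reads are wrong at that position.
def majority_error_prob(n: int, p: float) -> float:
    """Probability that more than n/2 of n independent reads are wrong."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

expected = GENOME * majority_error_prob(COVERAGE, PER_READ_ERROR)
print(f"expected genome-wide errors after consensus: {expected:.1e}")
```

Under these assumptions the expected number of consensus errors is effectively zero, which is why such deep coverage is worth the cost when hunting a handful of real SNPs.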

At any rate, the scientists tested their new method on a set of twins, and it worked: twelve SNPs were identified between the twin brothers. Typically, one experiment is not considered to hold much weight in science, but this particular experiment is backed by well-established genetic theory, and the results were exactly what we would expect.

So, case solved, right? Well, maybe not. It turns out that this method comes with a hefty price tag – over $100,000. This is far too much to be practical in forensic casework, especially when you consider that about 1 person in 167 is an identical twin. Of course, this price will go down as DNA sequencing prices continue to plummet in light of newer, better technology. Still, it will be many years before anything like this is affordable (a typical forensic DNA test costs in the neighborhood of $400-$1000). Furthermore, the instruments used in this method (next-generation sequencers), though typical in research science, have not been approved for use in court. That in and of itself can be a challenging obstacle to overcome, regardless of cost.

Perhaps in a few decades these issues will be resolved. Perhaps not. Either way, it might be a good idea to have a plan in the meantime. This is (hopefully) where my master’s thesis comes in.

DNA is composed of four nucleotides, commonly noted as A, T, C, and G. Throughout life, a methyl group – a carbon and three hydrogens – attaches to some of the C's in your genome. This is known as DNA methylation, a big component of the larger phenomenon known as epigenetics. As it turns out, these methyl groups attach more or less randomly to the C's, though some evidence suggests that environmental conditions play some part in this. In any case, the pattern of methyl groups attached to C's differs among individuals – even identical twins. In fact, studies have shown that newborn twins already exhibit DNA methylation discordance. Presumably, these differences become more pronounced as time goes on. Not many studies have looked at this, but the ones that have also show evidence of greater discordance with age.

There is a potential issue with studying DNA methylation: it doesn’t occur uniformly among tissues. In other words, a blood sample and a skin sample from the same individual will show different patterns of methylation. Moreover, cells within the same tissue can show different methylation patterns. Though not insurmountable, these issues make methylation analysis a tricky subject.

To tackle the first issue of tissue discordance, one could simply match the tissue type of the sample taken from the suspect to the tissue type of the sample found at the scene. The second issue of intra-tissue discordance is a bit trickier to tackle. For starters, we don't know too terribly much about how DNA methylation works. Presumably, if methylation differences occurred early in development, they would show the same pattern of proliferation as SNPs that occur early in development: the same DNA methylation pattern would be present in all of the daughter cells and would show up easily in a DNA sample from that tissue.

Another possible solution would be to take a statistical approach. This would involve measuring the methylation patterns several times and coming up with an "average" methylation. For example, let's say there are 10 C's susceptible to methylation in a particular DNA sequence. If I run 10 samples from a DNA swab, I might find the number of methylated C's to be: 3, 4, 5, 3, 2, 4, 5, 3, 4, 4. If you average these, you get 3.7 out of 10 possible methylated C's. Thus, you might say that this DNA sequence shows 37% methylation. If you do the same thing for the other twin and come up with 5.5 out of 10 possible methylated C's, you could say that the other twin's sequence shows 55% methylation. Ideally, these numbers would be relatively reproducible, especially as you increase the sample number and/or the number of potentially methylated C's per sequence.
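As a sketch, the averaging idea above translates into a few lines of Python (the counts are the hypothetical values from the example, not real data):

```python
def percent_methylation(counts, total_sites):
    """Average methylated-C counts across repeated runs, as a percent of sites."""
    return 100 * sum(counts) / (len(counts) * total_sites)

# Hypothetical runs from the example: methylated C's out of 10 possible sites
twin_a = [3, 4, 5, 3, 2, 4, 5, 3, 4, 4]
print(percent_methylation(twin_a, total_sites=10))  # 37.0
```

Repeating the measurement and reporting a percentage smooths over the cell-to-cell variation within a single tissue sample.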

Compared to the SNP method, my project is less definitive. However, good protocols would still make the method definitive enough. Once you narrow the suspects down to two twins via normal DNA testing, you have two possible outcomes: a match between one twin and the sample at the crime scene, or inconclusive. At this point, you just need to differentiate between two people, not 7 billion. Thus, the required statistical power is much, much lower. The big difference between my method and the SNP method is the price. Whereas the SNP method costs between $100,000 and $160,000, my method could be done in-house for less than $5000. Furthermore, my method is performed using the same instruments as traditional DNA testing, meaning that the new instrumentation does not need to be validated for use in court.

So, while it will take some work, and my project is more of a proof-of-concept study, the use of DNA methylation in forensics is generating a lot of attention. One of the issues with methylation in my study, i.e., different patterns in different tissues, has been a major benefit to a different application of DNA methylation – tissue identification. The idea here is that if you can identify methylation patterns consistent within a tissue type, you can use those patterns to identify the tissue. Another aspect relevant to my project, the change in methylation with age, has been vetted as a possible investigative tool. If you can identify levels of methylation that are consistent with different age groups, you can potentially "age" a suspect just by their DNA methylation. Studies on methylation aging are few and far between, but preliminary results are promising, suggesting that age-based methylation analysis can get within +/- 5 years of an individual's actual age.

As we learn more about DNA methylation, it will become more useful. This is true not only for forensics, but also medicine, since methylation plays an important role in turning genes “on” or “off.” This is particularly true in cancer, where abnormal DNA methylation seems to occur. But, before we try to cure cancer with methylation, perhaps we can perform the smaller task of telling two twins apart from each other.

*Also published in part at http://forensicoutreach.com/library/when-dna-isnt-enough-methylation-forensics-and-twins-part-1/

and

http://forensicoutreach.com/library/when-dna-isnt-enough-methylation-forensics-and-twins-part-2/

Multiplex Automated Genome Engineering: Changing the world with MAGE

Humans have evolved a unique mastery of toolmaking through advanced technology. As an extension of our biological bodies, technology has loosened the grip of natural selection. This is particularly true in biomedicine and genetic engineering: we now have the ability to directly alter the blueprint of life for any purpose we wish. Beginning in the 1970s with the creation of recombinant DNA and transgenic organisms, genetic engineering has given scientists the ability to study genes on a level that may not have seemed possible at the time. The field has provided a wealth of knowledge as well as practical applications, such as knockout mice and the ability to produce near-endless amounts of human insulin for diabetics.

As of 2009, multiplex automated genome engineering (MAGE) has ushered in a new branch of genetic engineering – genomic engineering. We are no longer restricted to altering single genes, but rather are able to alter entire genomes by manipulating several genes in parallel. This new ability, brought about by MAGE technology, allows for nearly endless applications that stretch well beyond medicine or industry; agriculture, evolutionary biology, and conservation biology will benefit tremendously as MAGE technology progresses. Genetic engineering advancements such as MAGE are poised to revolutionize entire fields of science, including synthetic biology, molecular biology, and genetics by offering faster, cheaper, and more powerful methods of genome engineering.

Homologous Recombination

Genetic engineering underwent a revolutionary change in the 1980s, largely due to the pioneering work of Martin Evans, Mario Capecchi, and Oliver Smithies. Evans and Kaufman were the first to describe a method for extracting, isolating, and culturing mouse embryonic stem cells. This laid the foundation for gene targeting, a method discovered independently by Oliver Smithies and Mario Capecchi. Capecchi and his colleagues were the first to suggest that mammalian cells had the machinery for homologous recombination with exogenous DNA. Smithies took this a step further, demonstrating targeted gene insertion using the β-globin gene. Ultimately, the combined work of Evans, Smithies, and Capecchi on homologous recombination earned them the Nobel Prize in Physiology or Medicine in 2007. The science of homologous recombination has enabled many discoveries, primarily through the creation of knockout mice.

Homologous recombination works under many of the same principles as chromosomal recombination in meiosis, wherein homologous genetic sequences are exchanged. The difference lies in the fact that targeted homologous recombination works with exogenous DNA and at the level of a single gene rather than a chromosome.


The method works by using a double-stranded genetic construct with flanking regions that are homologous to the flanking regions of the gene of interest. This allows the sequence in the middle, containing a positive selection marker and the new gene, to be incorporated. The positive selection marker should be something that can be selected for, such as resistance to a toxin or a color change. Outside one of the flanking regions of the construct should lie a negative selection marker; the thymidine kinase gene is commonly used. If homologous recombination is too lenient and the thymidine kinase gene is incorporated into the endogenous DNA, those cells can be detected and eliminated. This prevents too much genetic information from being exchanged.

Using this method, knockout mice can be created. A knockout mouse is a mouse that is lacking a functional gene, allowing for elucidation of the gene’s function. Embryonic stem cells are extracted from a mouse blastocyst and introduced to the gene construct via electroporation. The successfully genetically modified stem cells are selected using the positive and negative markers. These are isolated and cultured before being inserted back into mouse blastocysts. The mouse blastocysts can then be inserted into female mice, producing chimeric offspring. These offspring may be mated to wild-type mice. If the germ cells of the chimeric mouse were generated from the modified stem cells, then the offspring will be heterozygous for the modified gene and wild-type gene. These heterozygous mice can then be interbred, with a portion of the offspring being homozygous for the modified gene. This is the beginning of a mouse line with the chosen gene “knocked-out.”


Multiplex Automated Genome Engineering Process

The major drawback of the previously described method of “gene targeting” is the inability to multiplex. The process is not very efficient, and targeting more than one gene becomes problematic, limiting homologous recombination to single genes. In 2009, George Church and colleagues solved this issue with the creation of multiplex automated genome engineering (MAGE). MAGE technology uses hybridizing oligonucleotides to alter multiple genes in parallel. The machine may be thought of as an “evolution machine,” wherein favorable sequences are chosen at a higher frequency than less favorable sequences. The hybridization free energy is a predictor of allelic replacement efficiency. As cycles complete, sequences become more similar to the oligonucleotide sequence, increasing the chance that those sequences will be further altered by hybridization. Eventually, the majority of endogenous sequences will be completely replaced with the sequence of the oligonucleotide. This process only takes about 6-8 cycles.
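As a rough illustration of why only about 6-8 cycles are needed: if each cycle converts some fraction of the remaining unmodified alleles, the replaced fraction compounds geometrically. The 30% per-cycle efficiency below is an assumed, illustrative figure, not a value from the text:

```python
def replaced_fraction(r: float, cycles: int) -> float:
    """Fraction of alleles replaced after `cycles` MAGE cycles, assuming a
    constant per-cycle replacement efficiency r (a simplifying assumption)."""
    return 1 - (1 - r) ** cycles

for n in (2, 4, 6, 8):
    print(f"after {n} cycles: {replaced_fraction(0.3, n):.1%} replaced")
```

Under this toy model, 30% efficiency per cycle already pushes replacement above 88% by cycle 6 and above 94% by cycle 8, consistent with the 6-8 cycle figure.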


After the E. coli cells are grown to mid-log phase, expression of the λ Beta protein is induced. Cells are chilled and the media is drained. A solution containing the oligonucleotides is added, followed by electroporation. This step is particularly lethal, killing many of the cells. However, the surviving cells are chosen based on positive markers (optional, but this increases efficiency) and allowed to reach mid-log phase again before the process repeats. Church and his colleagues optimized the E. coli strain EcNR2 to work with MAGE. EcNR2 contains a plasmid with the λ phage genes exo, beta, and gam, and is deficient in mismatch repair. When expressed, the phage genes help keep the oligonucleotide annealed to the lagging strand of the DNA during replication, while the mismatch-repair deficiency prevents the cellular repair machinery from changing the oligonucleotide sequence once it is annealed. Using an improved technique called co-selection MAGE (CoS-MAGE), Church and colleagues created EcHW47, the successor to EcNR2. In CoS-MAGE, cells that exhibit naturally superior oligo uptake are selected before the genes of interest are targeted.

MAGE technology is currently in the process of being refined, but shows incredible promise in practical applications. Some of the immediate applications include the ability to more easily and directly study molecular evolution and the creation of more efficient bacterial production of industrial chemicals and biologically relevant hormones. Once the technique has been optimized in plants and mammals, immediate applications could be realized in GMO production and creation of multi-knockout mice that will give scientists the ability to study gene-gene interactions on a level previously unattainable. A more optimistic and perhaps grandiose vision could see MAGE working towards ending genetic disorders (CRISPR technology, an equally incredible genomic editing technique, may beat MAGE there) and serving as a cornerstone technique in de-extinction. The ability to alter a genome in any fashion brings with it immense power. The possibilities for MAGE are boundless, unimaginable, and are sure to change genomic science.

For more information on Homologous recombination, see:

http://www.bio.davidson.edu/genomics/method/homolrecomb.html

For more information on MAGE, see:

Wang, H. H., Isaacs, F. J., Carr, P. A., Sun, Z. Z., Xu, G., Forest, C. R., & Church, G. M. (2009). Programming cells by multiplex genome engineering and accelerated evolution. Nature, 460(7257), 894-898.

Wang, H. H., Kim, H., Cong, L., Jeong, J., Bang, D., & Church, G. M. (2012). Genome-scale promoter engineering by coselection MAGE. Nature Methods, 9(6), 591-593.

For more information on CRISPR (which I highly recommend; it’s fascinating), see:

https://www.addgene.org/CRISPR/guide/

De-Extinction Is On Its Way

“What’s the point of bringing back some pigeons that have been gone for a century, or some hairy elephants that disappeared four millennia ago? Well, what’s the point of protecting unhairy elephants in Africa or over-specialized pandas in China or dangerous polar bears in the Arctic, or any of the endangered species we spend so much money and angst on preserving?”

– Stewart Brand

It’s difficult to argue with that logic. In 2012, the US spent over $3 billion on conservation efforts.

I don’t know about you, but I always dreamt of a real-life Jurassic Park. Unfortunately, it doesn’t seem like dinosaurs will ever have the chance to roam the Earth again. Quite frankly, with new research showing that most dinosaurs probably had feathers, I’m not sure they would even live up to the image of dinosaurs our minds have been conditioned to expect. They’d be giant, carnivorous chickens, more or less. But what about a mammoth or a thylacine?

While the DNA that once inhabited a dinosaur bone is long gone, victim to over 65 million years of radiation, hydrolysis, and other forms of degradation, DNA can still be found in some more recent specimens. But how would it work? How could we possibly bring back – that is, De-Extinct – an organism? Well, actually, it’s already been done.

The Sad Saga of the Pyrenean Ibex 

The last surviving Pyrenean Ibex died in 2000. Of all the ways for a species to go out, this one was found dead underneath a fallen tree. It seems as though Mother Nature was just out to get them. So, naturally, humans did what humans do best – try to one-up Mother Nature. Thinking pre-emptively, biologists in 1999 cryogenically froze a tissue sample from Celia, the last surviving member of her species. When Celia died, scientists were ready to bring her back.

The technique used is called somatic cell nuclear transfer. You can find a short video of it happening in real time here. Essentially, an oocyte – egg cell – from a domestic goat was enucleated, and the nucleus from one of Celia’s somatic (body) cells was inserted into the empty oocyte. The resulting cells were then transferred into domestic goat surrogates. Unfortunately, the process proved technically difficult. 285 embryos were reconstructed. Of those, 54 were transferred to 12 ibex and ibex-goat hybrids, and only two pregnancies survived the first two months of gestation. One clone was finally birthed in 2009 – the very first de-extinction. Unfortunately, the clone had a lung defect and died of a collapsed lung only 7 minutes after birth. One of the problems was likely the fact that Celia was already 13 years old – old age for a goat – when the tissue sample was taken. This means that her telomeres, the caps on chromosomes that protect the supercoiled DNA, were already very short. As DNA replicates, the replication enzymes cannot copy the very end of the DNA (where the telomeres are located), so the telomeres are truncated with each division. They act as a sort of buffering system that keeps the actual genes from being damaged (on a side note, telomere length roughly tracks your age).
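The telomere "buffer" can be caricatured in a few lines. Every number here is an illustrative assumption (starting length, loss per division, critical length), chosen only to show how a finite buffer limits the number of divisions a cell line has left:

```python
# Toy model of telomere shortening; all values are assumed for illustration.
telomere_bp = 15_000        # assumed starting telomere length (base pairs)
LOSS_PER_DIVISION = 75      # assumed bp lost at each replication
CRITICAL_BP = 5_000         # assumed length at which division stops

divisions = 0
while telomere_bp > CRITICAL_BP:
    telomere_bp -= LOSS_PER_DIVISION
    divisions += 1

print(divisions)  # divisions available before the buffer is spent: 134
```

Start the model with an already-shortened telomere, as in cells from a 13-year-old donor, and far fewer divisions remain, which is one plausible reason cloning from aged tissue is so fragile.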

The procedure seemed to doom any idea of de-extinction. After all, if we can’t even bring back a species that has been dead for under a decade, how can we ever hope to bring back a 30,000 year-old wooly mammoth? Fortunately, scientists are incredibly stubborn, and didn’t just drop the idea all at once. With advances in technology, science fiction often becomes reality. In the field of de-extinction, the limiting factor is DNA extraction and sequencing technology, which seems to be growing faster than Moore’s Law predicts it should.

A New Method

So, is there another way – a better way – to clone an animal than somatic cell nuclear transfer? Maybe, and it’s called induced pluripotent stem cell (iPS)-derived sperm and egg cloning. The idea is to splice your target species’ DNA (say, from a mammoth) into a surrogate stem cell genome (say, from an Asian elephant). Because these are pluripotent stem cells, they can become any cell type. So you coax the newly modified stem cells into becoming germ cells – those that will make the testes and ovaries. You then insert the germ cells into the embryos of a male and a female surrogate (Asian elephants, in our example). Now you have male and female Asian elephant embryos with mammoth precursory germ cells. You grow up the two surrogates, and they will exhibit target-species (mammoth) gonads. So you then mate the two, and out comes a “full-blood” mammoth (click here and skip ahead to about the 10-minute mark to see this example with falcons and chickens. I recommend watching the entire TED talk. It’s my favorite one, and will explain a lot about De-Extinction).

You will see a second De-Extinction in your lifetime, and hopefully more to follow. Expect it from passenger pigeons, gastric-brooding frogs, and, hopefully, mammoths.

Maybe We Can… But Should We?

This, to me, is one of the biggest hurdles. You have to convince people that something of this caliber is a good idea. I began the post with a quote from Stewart Brand that I think encapsulates the argument for De-Extinction. Hank Greely, a Stanford Law School professor specializing in the ethics of biomedical technology, gives an excellent TED talk on this (found here). To outline his talk, here are the 10 things we must consider: 5 risks and 5 benefits:

  • Animal Welfare
  • Health
  • Environment
  • Political Concerns
  • Morality
  • Scientific Knowledge
  • Technological Progress
  • Environment
  • Justice
  • Wonder

I will flesh these out quickly, but won’t spoil the TED talk.

Animal Welfare

  • Cloning isn’t a very “safe” process. It can take hundreds of embryos, and often the few who survive don’t last long. We need to ensure the welfare of the animals that we try to bring back.

Health

  • What if we bring back an animal and it happens to be a great vector for a terrible disease? Oftentimes the beginning of an epidemic is a new, better vector.

Environment

  • If we bring back a species, is it going to cause ecological problems?

Political Concerns

  • If we make De-Extinction a plausible conservation effort, will it undermine current efforts to preserve what we have? Why try to save them if we can just bring them back? Similarly, is it worth it financially?

Morality

  • To be short, we are playing God. We are doing something that, presumably, has never really happened in almost 4 billion years of life. We are redrawing the branches of the tree of life. It's not something to be taken lightly.

Scientific Knowledge

  • We could learn things previously unknowable about genetics, evolution, and biology.

Technological Progress

  • De-Extinction is the edge of science. It is pushing technology to its outer bounds, making technological development increase faster than it normally would. This provides technological offshoots for many medical procedures.

Environment

  • Bringing back a species can actually be good for the community. See, for example, the effect of wolf reintroduction at Yellowstone.

Justice

  • Are reparations due? It's arguable whether or not we caused the megafaunal extinctions – mammoths, woolly rhinoceroses, cave bears, etc. – but there's no doubt that some species, such as the passenger pigeon, went extinct due to human activity, namely hunting. And, sadly, we continue this destructive path, which is stripping the Earth of some of its most precious large mammals – tigers, elephants, and rhinoceroses, just to name a few.

Wonder

  • My favorite. This is what science does. It inspires us. It awes us. It brings our imagination outside of our minds and places it in front of us. Wonder isn't all that impractical either. Wonder is what drives scientific knowledge further. It's a self-perpetuating field that is snowballing into the ever-decreasing realm of science fiction.

The “can we” of De-Extinction is coming to a close. It’s time to start discussing the “should we” aspect. The technology will be here very soon, but are we ready?

The Paleo Diet – Brilliantly Simple, or Simply Wrong?

Introduction to the Paleo

According to thepaleodiet.com, "the Paleo Diet, the world's healthiest diet, is based upon the fundamental concept that the optimal diet is the one to which we are genetically adapted." Who can disagree with that? After all, it does make sense that the best diet would be one that, according to our genetics, our body can utilize most efficiently. However, is this what the Paleo Diet actually offers?

The Paleo Diet claims to offer “modern foods that mimic the food groups of our pre-agricultural, hunter-gatherer ancestors.” First we have to look at what the Paleo Diet means by our “ancestors.” Being a “paleo” diet, it is referring to our ancestors in the Paleolithic era, which extends from about 2.5 million years ago to about 10,000 years ago, just after the end of the last ice age and around the dawn of the Neolithic – or agricultural – revolution. 2.5 million years is a pretty broad range to select a diet from, but perhaps not so broad on an evolutionary timescale.

One issue that arises when studying the diets of ancient hominids is that archaeological sites older than about 10,000 years aren't very common. The reason probably lies in the fact that prior to the Neolithic revolution, people were hunter-gatherers. They didn't really have permanent settlements. Hunter-gatherers travel to where the food is – presumably that which can be hunted (migratory animals such as elk, bison, or caribou, depending upon geographic location) and gathered (berries, nuts, shellfish, and so on). This would vary by the season and even by the century as animals permanently migrated to new locations or became over-hunted in their current location. However, when mankind developed agriculture about 10,000 years ago, people began to establish permanent settlements. These settlements, which were fueled by the domestication of plants and animals and thus liberation from hunting and gathering, provide a rich source of archaeological artifacts. It's difficult to find the few material bits and pieces of a nomadic lifestyle. When people settle for hundreds or even thousands of years in a location, artifacts build up, and the chances of finding something 10 millennia later are much greater.

How do we know about their diet? Archaeological evidence

So, how do we know what the hunter-gatherers ate? One way is to look through the archaeological sites that we do have. Animal bones are often signs that the inhabitants ate meat. Furthermore, we might find tools that could have been used for butchering along with cut marks on the bones that imply that the animal was butchered. Along with this, we can track morphological changes over time. Changes in the size and structure of certain bones, such as the mandible and cranium, might indicate a change in diet. A diet heavier in meat could require a larger mandible and would imply an increase in calories that would be necessary to support a larger brain in the larger cranium.

Osteological analysis, though, is qualitative at best. It’s important to remember that an archaeological site is merely a snapshot in time. For example, a site that was abandoned in the winter (maybe to move somewhere warmer, a death of the inhabitants, or something completely different) might show a heavy use of meat due to the fact that not many plants grow in the colder months. With so few sites, there isn’t very strong evidence one way or the other about diets. Small sample sizes can be incredibly biased.

Stable Isotopes

Another way to study ancient diets is stable isotope analysis. If you remember from chemistry class, isotopes are atoms of the same element with the same number of protons but differing numbers of neutrons. Because the proton (atomic) number defines an element's chemical properties, isotopes behave as the same element, just with slightly different weights. For example, about 99% of the carbon in the atmosphere is C12 – carbon with an atomic mass number (combined number of protons and neutrons) of 12. This is the most abundant of carbon's stable forms. Carbon has two other isotopes that are relevant to scientific studies, C13 and C14. Though there are many more isotopes, they are found in minute amounts and are so unstable that they decay rather quickly.

You have probably heard of carbon dating, which measures the relative abundance of C14 in an organic artifact and derives an approximate date based on known rates of decay for C14. This works because there is a certain ratio of C12 to C14 in the atmosphere, which is taken up by living organisms. After an organism dies, it stops taking in carbon, and the radioactive C14 already in its tissues decays at a known rate. While this relies on the assumption that C14 to C12 ratios were the same in the past, it can often be cross-verified with other forms of dating, such as stratigraphy, phylogenetic dating, other forms of radiometric dating, and sometimes even early writings (for example, the date derived from carbon dating an item purportedly from some event can be compared to a written, dated historical document describing the event).
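The decay relationship behind carbon dating is simple enough to sketch in a few lines of Python. This is the textbook exponential-decay relation, not any particular lab's method; the half-life constant is the commonly cited value for C14:

```python
import math

# Half-life of C14 in years (commonly cited value).
C14_HALF_LIFE = 5730.0

def radiocarbon_age(fraction_remaining):
    """Estimate a sample's age from the fraction of its original C14
    that remains, via exponential decay: N/N0 = (1/2)**(t / t_half)."""
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must be in (0, 1]")
    # Solve N/N0 = (1/2)**(t / t_half) for t.
    return C14_HALF_LIFE * math.log(fraction_remaining) / math.log(0.5)

# Half the C14 left means one half-life old; a quarter left, two half-lives.
print(radiocarbon_age(0.5))   # 5730.0
print(radiocarbon_age(0.25))  # ~11460
```

Real radiocarbon labs go further and apply calibration curves to correct for past atmospheric variation; this sketch assumes a constant atmospheric ratio, which is exactly the assumption the cross-verification methods above are meant to check.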

Stable isotope analysis works, as the name implies, by measuring a stable, rather than radioactive, isotope. Because C13 is stable (C12 and C13 are the only stable isotopes of carbon, and C14 is the longest-lived radioactive one), it will remain in the bones and teeth in the same C12:C13 ratio as when the organism was alive. Great! Although our bodies do not discriminate between C12 and C13, some plants do, ever so slightly. Ribulose-1,5-bisphosphate carboxylase/oxygenase – commonly known as RuBisCO – is the enzyme that, in most plants, binds the CO2 entering the stomata. RuBisCO happens to have a slight affinity for C12, meaning the plant – and everything that eats the plant – has a disproportionate amount of C12 relative to C13. These plants are known as "C3" pathway plants.

In arid climates, where water is most precious, plants had to adapt. A problem arises because water escapes through the stomata whenever they open to let in CO2 for RuBisCO to capture. Therefore, some plants, known as C4 pathway plants, evolved to use another enzyme, PEP carboxylase, to bind CO2 first. PEP carboxylase binds CO2 much more strongly than RuBisCO does, and shows no preference for either C12 or C13.

Carbon isotopes are used in conjunction with other elemental isotopes, such as nitrogen, to assess the relative ratios of plant to meat in diets. This is all based on small isotopic differences between autotrophs and heterotrophs, and among carnivores, herbivores, and omnivores. For example, organisms higher in the food chain tend to have more N15 than organisms lower in the food chain. It is important to understand the isotopic variation of the ecosystem, however, because ratios can vary, especially when environmental manipulation (such as cooking) comes into play. Ultimately, stable isotope analysis has a modest amount of discriminatory power, but it is not comprehensive. It uses quantitative measurements to make a qualitative claim, and does so on a limited number of samples.
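For concreteness, these tiny C13:C12 shifts are conventionally expressed as delta-13-C, a per-mil deviation from a reference standard. A minimal sketch in Python, assuming the commonly cited VPDB reference ratio; the -20 per mil cutoff between C3 and C4 signatures is an illustrative assumption for this sketch, not a published threshold:

```python
# C13/C12 ratio of the VPDB reference standard (commonly cited value).
R_VPDB = 0.0112372

def delta13c(r_sample):
    """Convert a measured C13/C12 ratio into delta-13-C, in per mil:
    1000 * (R_sample / R_standard - 1)."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def likely_pathway(d13c_per_mil):
    """Rough classification: C3 plants typically sit near -27 per mil,
    C4 plants near -13; the -20 cutoff here is illustrative only."""
    return "C3" if d13c_per_mil < -20.0 else "C4"

# A tissue strongly depleted in C13 points toward a C3-plant food web...
print(likely_pathway(delta13c(0.0109)))  # C3 (delta is about -30)
# ...while one closer to the standard ratio points toward C4 plants.
print(likely_pathway(delta13c(0.0111)))  # C4 (delta is about -12)
```

In practice, archaeologists interpret delta-13-C alongside nitrogen values and local ecosystem baselines rather than against any single hard cutoff.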

Problems with the logic of a Paleo Diet

Which “paleo” should we eat like? The Inuit of 10,000 B.C.E.? 200,000-year-old Mitochondrial Eve? 1-million-year-old Homo erectus? Clearly some times, and species, of hominids ate more meat than others. An Inuit living in northern Canada survived largely off of seal fat, while Homo erectus probably lived more off of fruits and nuts. Humans survived and came to dominate the planet largely due to their adaptability, including our omnivorous diet. Our ability to adapt to mostly nuts or mostly blubber has granted us freedom to roam from the heart of Africa to the frozen lakes of Canada. Paleolithic hunter-gatherers simply ate what was available to them.

Many Paleo dieters cite articles discussing health disparities that arose when agriculture entered the picture. While this is true, it's not necessarily because we stopped eating a “paleo diet.” More likely, health problems arose because we stopped eating such a wide variety of foods. Many ancient peoples went from elk, bison, nuts, and berries to only what we could domesticate. Eventually, our domesticated crops and animals grew in variety and things leveled out a little more. This was likely not a rapid transition. Domestication may have started out as simply a way to supplement hunting and gathering before the boom of the Neolithic Revolution. Regardless of your diet, it is important to eat a variety of foods in order to cover all of your nutritional needs. Many people in Westernized cultures today eat a much more monotonous diet than they should.

Are we genetically identical to our “Paleo” brothers and sisters?

One of the main arguments of the Paleo Diet is that our genome has changed little since the end of the Paleolithic period, meaning our bodies are still best adapted to the diet of that time. This argument is a bit short-sighted. To claim that our genome has not adapted to our Neolithic lifestyle is simply incorrect. It is true that our genome evolution lags far behind our cultural evolution, and is often overshadowed by it. However, there do exist some key differences between our genomes and those of a Paleolithic hominid. The two most well-known adaptations are the amylase and lactase mutations. Amylase is an enzyme that allows for digestion of starch from grain. As the Neolithic Revolution kicked into gear, those with extra copies of the amylase gene (AMY1) better metabolized all of the new grain they could grow. These extra copies put amylase in the saliva, so starch digestion begins in the mouth rather than halfway through, in the gut.

The second mutation is a regulatory mutation. People are born with a gene that regulates the production of lactase, an enzyme that breaks down the biologically unusable dairy sugar lactose into the biologically usable sugars galactose and glucose. Before the animal husbandry practices of the Neolithic Revolution, the lactase gene would become transcriptionally inactive, or “turned off,” in most people around the age of 5-7. After this age, the child was no longer breastfed and really had no need for lactase. However, once people began raising dairy animals, such as goats and cattle, dairy products such as milk and yogurt became important staple foods. This seems to have driven positive selection for the genetic mutation that allows the lactase gene to remain “on” throughout life. Those with the lactase and amylase mutations could better exploit dairy and grain products than those without them. So, while our genomes are not radically different, they are indeed different, and have adapted to some of the Neolithic diet changes.

Microbiomes

Although our genome is relatively similar to our ancestors', our microbiome certainly isn't. The microbiome is the summation of microorganisms that inhabit us. This might not seem like a big deal, so let me put it in perspective: of all the cells in your body, including the microorganisms, human cells make up only about 10%. The other 90%? That would be the microbiome. By cell count, you are 90% microorganism. With the recent completion of the Human Microbiome Project, expect to see some incredible discoveries about the differences between ourselves and our Paleo ancestors in the near future.

So how do we study the Paleo microbiome? One way is through ancient DNA. Unfortunately (or fortunately, for researchers today), there were no Paleo dentists, nor were there any Paleo toothbrushes. When people ate, plaque built up and calcified on their teeth. This calcified plaque is called dental calculus, and it preserves the DNA of the microorganisms that made up the plaque, along with some of the DNA from the actual food. From this, using Next Generation Sequencing techniques, we can learn more about the kinds of food and the microorganisms that were present in the bodies of our ancestors. By comparing what we find to oral microbiomes today, we can better understand what Paleo people ate. Microfossils can also be preserved in the dental calculus, allowing visual confirmation of food in the plaque. Again, these are qualitative measures limited by sample size, but they are the best methods we have, and they are producing some excellent research.

Is the food still the same?

People freak out about GMOs. The truth is that basically everything we eat – meats and plants alike – is genetically modified. Over thousands of years we have artificially selected plants and animals for particular traits. As our genome has changed since Paleolithic times, plant and animal genomes have changed radically, largely due to human manipulation. So, even if you eat according to the Paleo Diet, you are eating the modern-Paleo Diet, not the Paleo-Paleo Diet. Really, you aren't even eating like you think the ancestors ate. Our modern plants are "human inventions," as Dr. Christina Warinner – a leading dental calculus expert at the University of Oklahoma – puts it.

Ultimately, the Paleo Diet, as it is marketed, isn't really a Paleo Diet at all. There's no harm, and definitely some benefit, in cutting refined sugars and overly processed meats out of your diet. However, eating modern versions of nuts, fruits, veggies, and more meat isn't going to make you any more like a Paleo-man or Paleo-woman than if you just eat a normal, balanced diet. If anything, skipping out on legumes, dairy, and multi-grain wheat, which are prohibited in the Paleo Diet, could leave you short on certain nutrients. Technological and agricultural advances have produced some amazing foods that our Paleo ancestors could only have dreamt about. If you really want to be Paleo, then take advantage of the advances in food science. It's what our ancestors would have done.

*A form of this is also published at http://anthronow.com/wp-content/uploads/2016/04/AnthroZine_1601.pdf