Tag Archives: Biology

Medicine, Technology, and the Ever-Changing Human Person

Though we often take for granted that humans are persons, they are not exempt from questions surrounding personhood. Indeed, what it means to be a person is largely an unsettled argument, even though we often speak of “people” and “persons.” Just as it’s important to ask if other beings might ever be persons, it is […]

via Medicine, Technology, and the Ever-Changing Human Person — Savage Minds

Of Primates and Persons — Savage Minds

Savage Minds welcomes guest blogger Coltan Scrivner for the month of January. Coltan will be writing a series of posts on personhood from different disciplinary perspectives. When I moved to Chicago for graduate school, one of the first things I did was go to the Lincoln Park Zoo. Just like with other zoos I’ve been […]

via Of Primates and Persons — Savage Minds

A Tale of Cookies and Milk: How We Adapted to Consuming Grains and Dairy

Humans are curious creatures. We like to poke and prod at new things to see what will happen. This curiosity is part of the reason we are successful. Though it can sometimes lead to disastrous outcomes, curiosity can be the reason not only for cultural inventions but also for biological changes. This is especially true for our diet, which has changed radically in the past 10-20 thousand years. Two of the biggest changes have been our ability to efficiently digest grains and dairy. The agricultural revolution led to a lot of changes in the human diet, and humans were experimenting with many new types of food. I’m sure the first individual to start eating grain was met with a warmer reception than the one who suggested we start drinking cow and goat milk. At any rate, both ventures wound up changing our biology and culture. Just think: without amylase and lactase, Santa would be having something other than cookies and milk.

The Short Story of Amylase

In order to digest grains or any other starchy food, an organism needs an enzyme called amylase. Amylase hydrolyzes starch, eventually freeing the glucose molecules contained within the food. Though amylase is not unique to humans, there are some unique aspects of human amylase. In humans, there is a positive correlation between the number of copies of AMY1, the gene responsible for producing salivary amylase, and the amount of amylase expressed in the saliva. Interestingly, the average human carries about 7 times as many copies of AMY1 as chimpanzees do, suggesting selection on amylase after our split from the common ancestor we share with chimpanzees. The small differences among the DNA sequences of human AMY1 copies suggest a fairly recent selection event. Moreover, populations with high-starch diets have more AMY1 copies than populations with low-starch diets, further supporting recent and fairly rapid evolution. When it comes to diet, it seems natural selection can act fairly quickly.

The process of carbohydrate digestion begins in the mouth with an enzyme called ptyalin, also known as salivary α-amylase. Ptyalin hydrolyzes the glycosidic bonds within starch molecules, breaking them down into the disaccharide sugar known as maltose. In the walls of the stomach, specialized cells called parietal cells secrete hydrogen and chloride ions, creating hydrochloric acid. Amylase, which works at an optimum pH of about 7, cannot function in the highly acidic environment of the stomach.

The second part of starch digestion is initiated in the small intestine by an enzyme called pancreatic amylase. Though pancreatic amylase and salivary amylase are coded by two different DNA segments, they sit side by side in the genome. It has been suggested that an endogenous retrovirus inserted DNA between the two copies of amylase that existed in our ancestors’ genome; this interruption in the gene’s open reading frame caused a mutation that promoted amylase production in the saliva from one of the gene copies that originally coded for pancreatic amylase. This mutation would have had a clear advantage, allowing for greater breakdown of starchy foods. Further evidence for the positive selection of salivary amylase production can be seen in its independent evolution in both mice and humans.

So the story for amylase is fairly short. Our ancestors began with two pancreatic amylase genes, which split to create one pancreatic and one salivary amylase gene. Over time, copy-number variations in genes occurred and were either selected for or against. Random gene duplication in conjunction with varying diets among human populations has resulted in the amylase locus being one of the most variable copy-number loci in the entire human genome.

The Somewhat Longer Story of Lactase

The Neolithic (agricultural) revolution brought about some of the biggest cultural changes that our species has ever seen. Small groups of hunter-gatherers began to morph into large societies of agriculture-based farmers that existed in tandem with groups of people who lived a nomadic herding lifestyle. Nomadic herders could travel between these newly formed cities, trading meat, milk, or animals for agricultural products such as recently domesticated plants and grains. This substantial change in lifestyle caused a rapid overhaul in many aspects of human biology, including immunity, body size, and the prevalence of certain digestive enzymes.

Lactase is the enzyme that breaks down lactose, the disaccharide sugar found in dairy products, into the monosaccharides glucose and galactose. Lactase is an essential enzyme because it allows infants to break down the lactose in their mother’s milk. However, for a significant portion of the world’s population, the lactase gene is down-regulated during childhood. Curiously, the portion of the world’s population that does not experience this down-regulation is mostly of European descent. There is an interesting correlation between geographic location and the percentage of the population with lactase persistence: the further north you go in Europe, the more lactase persistence you find. This probably has to do with the fact that the colder climate of Europe, especially northern Europe, left fewer options for food. The ability to digest lactose and reap its benefits into adulthood could have acted as a major factor in surviving to reproductive age, thus increasing the prevalence of lactase persistence in those populations.

Milk has a decent amount of calories and fat to keep energy reserves up, allowing people to survive harsh winters in Northern Europe. In addition, it provides nutrients such as calcium, protein, and vitamins B12 and D. Today in the Western world we see the high caloric and fat content of milk as a threat of weight gain; people living around 7000 B.C., however, would have seen it as a gold mine for survival. As essential as the calories and fats were to Northern European Neolithic people, the vitamin D content of milk may have been equally important. In order for the body to synthesize vitamin D, it needs UVB rays from sunlight. This is an issue at northern latitudes, where it’s colder and there’s less sunlight than in many other areas on Earth. Moreover, the amount of UVB light that can be absorbed depends on the angle at which the Sun’s rays strike the Earth. So even during a clear, sunny day in the winter, people living at northern latitudes may not be absorbing much UVB.

[Figure: vitamin D synthesis]

One way to combat the low levels of UVB rays is to have fair skin. UVB rays that strike the skin cause the synthesis of cholecalciferol (vitamin D3) from 7-dehydrocholesterol that is already present in the skin, eventually leading to the production of a usable form of vitamin D. Specifically, 7-dehydrocholesterol is found predominantly in the two innermost layers of the epidermis. This can be an issue for UVB absorbance, since melanin, the pigment responsible for darker skin, absorbs UVB at the same wavelengths as 7-dehydrocholesterol. Indeed, it turns out that fair-skinned people (who tend to live in colder, more northern climates) are more efficient at producing vitamin D than darker-skinned people.

Vitamin D is a really underappreciated nutrient. It is essential for the absorption of calcium, which is nearly ubiquitous in its usage throughout the body, from brain function to muscle contraction. Recent research has illuminated other uses for vitamin D, including the regulation of genes associated with autoimmune diseases, cancers, and infection. One study in Germany found that participants (average age of 62) with the lowest vitamin D levels were twice as likely to die in the following 8 years, particularly of cardiovascular problems, as those with the highest vitamin D levels.

Though it isn’t too important to most of us today, lactase persistence might have saved Neolithic populations in Northern Europe. Milk’s dose of fat and calories helped bump up energy stores, while the calcium and vitamin D found in whole milk provided significant nutritional benefits. Though there are still many questions surrounding the evolution of lactase persistence in sub-populations of humans, the selective advantage of this phenotype is quite clear: those with lactase persistence would have had supplemental nutrition that might have helped them survive the Northern European winters.

 

 

Does Stress Really Cause Stomach Ulcers?

Stomach ulcers, also known as peptic ulcers, have an interesting history in medicine. It was originally believed that stress somehow caused them, and doctors didn’t have much advice to offer patients who were suffering from a peptic ulcer. In the 1980s, however, Barry Marshall and Robin Warren ran experiments suggesting that infection with the bacterium Helicobacter pylori caused peptic ulcers. As with many new discoveries, the news was met with fierce skepticism in the scientific and medical communities. To quell the skeptics, Barry Marshall actually swallowed some H. pylori to prove his hypothesis. This remarkable feat provoked other scientists to begin experimenting with the bacterium, and they found that Marshall was correct: H. pylori causes ulcers by weakening the mucosal lining, allowing stomach acid to come into contact with the stomach lining. With numerous repeated studies finding consistent results, it seemed like a closed case. The discovery was such a big deal that Marshall and Warren won a Nobel Prize for their work.

However, as with most things in human biology, a wrench was eventually thrown into the equation. It was later discovered that a significant portion of the population carries H. pylori in their stomachs but never develops ulcers. How could this be? Though all the details have not been fully elucidated, it seems there is a back-and-forth battle going on in the stomachs of people with H. pylori that remains relatively harmless and asymptomatic – that is, until they become stressed.

Let’s assume you are one of the majority of people on Earth who have H. pylori living in their stomachs. The bacterium probably colonized your stomach long ago, but you haven’t noticed anything out of the ordinary. Suddenly, there is a tragic psychological stressor in your life: you lose your job, fail an important final, your significant other ends the relationship – pick your poison. After a few weeks, your life begins to improve and you are feeling better about yourself, except for the excruciating stomach pains you are experiencing, especially after eating. You decide to see a doctor and are prescribed an antibiotic to fight off an H. pylori infection. What happened?

Because a psychological stressor can cause a physiological stress response, your sympathetic nervous system kicks into gear at the onset of the stressor. When the body is mounting a stress response, cortisol is released in large amounts. One of cortisol’s properties is that it is anti-inflammatory. This anti-inflammatory effect works by inhibiting the synthesis of a group of compounds known as prostaglandins; more specifically, cortisol blocks the release of arachidonic acid, the precursor to prostaglandins. But what does this have to do with ulcers?

As it turns out, a particular prostaglandin known as PGE2 is responsible for regulating both stomach acid secretion and mucus secretion. As with many compounds in the body, PGE2 has varying effects depending upon the cellular receptor to which it binds. The body is remarkably conservative: often the same molecule can be used for a wide range of effects depending upon the receptor it binds. You can think of the molecule as a skeleton key and the receptors as a bunch of old doors. The key can open any of the doors, but the outcome can be very different depending on which door is opened. If PGE2 binds EP3 receptors, acid secretion is inhibited; if it binds EP4 receptors, both acid secretion and mucus secretion are stimulated. This mechanism makes sense, as an increase in stomach acid would warrant extra mucus to protect the stomach lining. Following this logic, if stomach acid secretion is down (PGE2 binding EP3), the body is going to conserve a little energy by cutting back on mucus production.

An analysis of this information reveals a few key points. First, it has been shown that cortisol is inversely correlated with stomach acid secretion, which implies that PGE2 is binding EP3 receptors. So: more cortisol -> less PGE2 -> PGE2 binds EP3. This suggests that PGE2 has a higher affinity for EP3 receptors than for EP4 receptors, meaning that it will occupy most of the EP3 receptors before it begins to bind EP4 receptors. So, when you are stressed, cortisol concentration rises. When cortisol concentration rises, prostaglandin production decreases. At low concentrations of prostaglandin, PGE2 binds EP3 preferentially over EP4. This slows stomach acid secretion, which in turn lowers mucus secretion. The figure below shows a flow chart of these events, starting with chronic stress and ending in a peptic ulcer.

[Figure: flow chart from chronic stress to peptic ulcer]

This flow chart illustrates the cascade of events leading to an ulcer. In essence, H. pylori becomes an opportunistic pathogen: it takes advantage of the lower levels of mucus, which normally acts as a barrier between the stomach contents and the sensitive stomach lining.

 

Taking this information into account, the initial story begins to make more sense. After a few weeks of the blues, you find a new job, ace the test, and get the girl. Things are looking up. Because things are getting back to normal, the parasympathetic nervous system begins to take on its normal hours of operation and the sympathetic nervous system finally winds down. When this happens, your cortisol levels return to baseline, meaning that prostaglandin production is on the rise once again. PGE2 is being synthesized in larger amounts, and it begins binding EP4 receptors, turning acid secretion and mucus secretion back on. But there is a problem.

Over the past few weeks, your acid secretion has been down, your stomach’s mucus lining has thinned, and H. pylori has been proliferating at an increasing rate. With your normal defenses down, H. pylori has had the upper hand in the battle, virtually wiping out the remainder of your mucus lining and infecting cells in the lining of the stomach. As the parasympathetic system continues to stimulate digestion, the stomach acid overwhelms the under-mucoused stomach and begins to, quite literally, eat through the lining, resulting in an ulcer.

So, stress doesn’t “cause” the ulcer, but it weakens the mucus lining, affording H. pylori an opportunity to finish clearing out the rest of the mucus and cause an infection.

 

When DNA Isn’t Enough: Methylation, Forensics, and Twins

DNA evidence is often considered a “home run” in forensics. If you find readable DNA at the crime scene, and it matches a suspect, a correct conviction is almost assured. A DNA sample can often point to a single individual with ridiculous specificity – often 1 in a quadrillion or greater. But, what happens when someone else shares your DNA?

Monozygotic, or “identical,” twins differ from dizygotic, or “fraternal,” twins in that they come from the same zygote – hence “mono”zygotic. In other words, identical twins come from one fertilized egg, while fraternal twins come from two. This means that identical twins share the same DNA, while fraternal twins share as much DNA as any other sibling pair. There are, of course, many variations of monozygosity depending on when during development the split actually takes place. This nuance has led scientists in Germany to a possible solution to the problem of identical twin DNA.

During development, only a few cells are present. These cells begin to differentiate into the different tissue types that they will become. As these cells divide rapidly to produce all of the daughter cells, mutations can occur in the DNA. If a mutation occurs earlier, it will be present in a larger proportion of the daughter cells and will be more easily detectable during the twin’s lifetime. This differentiation of tissues also means that the earlier the twins split, the fewer mutations they will have in common (and, thus, the more differences you can detect in their DNA). It has been suggested recently that a handful of single-nucleotide mutations, or “SNPs,” can be found between twins. However, these SNPs aren’t so easy to find in a sea of 3 billion other nucleotides. To find these few differences, and find them reliably, the entire genome of both twins must be sequenced several times over. In the case of the German scientists, their experiment resulted in 94-fold coverage, meaning they covered each of the 3 billion nucleotides 94 times. This must be done to ensure accuracy: at 3 billion nucleotides, 99.9% accuracy still results in about 3 million errors. If anything, this shows how incredibly accurate our cellular machinery is.
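
To put those numbers in perspective, here is a back-of-the-envelope sketch in Python. The genome size, accuracy, and coverage are the figures quoted above; the assumption that read errors are independent of one another is mine, and real consensus calling is more sophisticated than this.

```python
# Back-of-the-envelope numbers for why deep coverage matters.
# Assumptions (illustrative only): ~3 billion bases, 99.9% per-base accuracy,
# and errors that occur independently across reads.

GENOME_SIZE = 3_000_000_000   # bases
PER_BASE_ERROR = 0.001        # 99.9% accuracy -> 0.1% error rate
COVERAGE = 94                 # each base read ~94 times

# Expected miscalled bases if each position were read only once:
single_pass_errors = GENOME_SIZE * PER_BASE_ERROR
print(f"Errors at 1x coverage: {single_pass_errors:,.0f}")   # ~3,000,000

# Naive model: a consensus base is wrong only if every read at that position is wrong.
consensus_error_rate = PER_BASE_ERROR ** COVERAGE
print(f"Expected errors at {COVERAGE}x: {GENOME_SIZE * consensus_error_rate:.2e}")
```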

At any rate, the scientists tested their new method on a set of twins, and it worked. In the end, twelve SNPs were identified between the twin brothers. Typically, one experiment is not considered to hold much weight in science, but this particular experiment is backed by strongly reinforced genetic theory, and the results were exactly what we would expect.

So, case solved, right? Well, maybe not. It turns out that this method comes with a hefty price tag – over $100,000. This is far too much to be practical in forensic casework, especially when you consider that about 1 person in 167 is an identical twin. Of course, this price will go down as DNA sequencing prices continue to plummet in light of newer, better technology. Still, it will be many years before anything like this is affordable (a typical forensic DNA test costs in the neighborhood of $400-$1000). Furthermore, the instruments used in this method (next-generation sequencing), though typical in research science, have not been approved for use in court. That in and of itself can be a challenging obstacle to overcome, regardless of cost.

Perhaps in a few decades these issues will be resolved. Perhaps not. Either way, it might be a good idea to have a plan in the meantime. This is (hopefully) where my master’s thesis comes in.

DNA is composed of four nucleotides, commonly noted as A, T, C, and G. Throughout life, a methyl group – a carbon and three hydrogens – attaches to some of the C’s in your genome. This is known as DNA methylation, which is a big component of the larger phenomenon known as epigenetics. As it turns out, these methyl groups attach to the C’s more or less randomly, though some evidence suggests that environmental conditions may play a part. In any case, the attachment of methyl groups to C’s differs among individuals – even identical twins. In fact, studies have shown that newborn identical twins already exhibit DNA methylation discordance. Presumably, these differences become more pronounced as time goes on. Not many studies have looked at this, but the ones that have also show evidence of greater discordance with age.

There is a potential issue with studying DNA methylation: it doesn’t occur uniformly among tissues. In other words, a blood sample and a skin sample from the same individual will show different patterns of methylation. Moreover, cells within the same tissue can show different methylation patterns. Though not insurmountable, these issues make methylation analysis a tricky subject.

To tackle the first issue of tissue discordance, one could simply match the tissue type of the sample taken from the suspect with the tissue type found at the scene. The second issue of intra-tissue discordance is a bit trickier. For starters, we don’t know too terribly much about how DNA methylation works. Ostensibly, if methylation differences occurred early in development, they would show the same pattern of proliferation as the SNPs that occur early in development. This means that the same DNA methylation pattern would be present in all of the daughter cells and would show up easily in a DNA sample from that tissue.

Another possible solution would be to take a statistical approach. This would involve measuring the methylation pattern several times and coming up with an “average” methylation. For example, let’s say there are 10 C’s susceptible to methylation in a particular DNA sequence. If I run 10 samples from a DNA swab, I might find the number of methylated C’s to be: 3, 4, 5, 3, 2, 4, 5, 3, 4, 4. If you average these, you get 3.7 out of 10 possible methylated C’s. Thus, you might say that this DNA sequence shows 37% methylation. If you do the same thing for the other twin and come up with 5.5 out of 10 possible methylated C’s, you could say that the other twin’s sequence shows 55% methylation. Ideally, these numbers would be relatively reproducible, especially as you increase the number of samples and/or the number of potentially methylated C’s per sequence.
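
Here is a minimal sketch of that averaging idea in Python, using the hypothetical replicate counts from the example above (the function name and the fixed ten sites per run are just for illustration):

```python
# A minimal sketch of the "average methylation" idea described above.
# The counts are the hypothetical replicate measurements from the example:
# methylated C's observed out of 10 possible sites per run.

def percent_methylation(counts, sites_per_run=10):
    """Average the replicate counts and express them as percent methylation."""
    mean_methylated = sum(counts) / len(counts)
    return 100 * mean_methylated / sites_per_run

twin_a_counts = [3, 4, 5, 3, 2, 4, 5, 3, 4, 4]
print(percent_methylation(twin_a_counts))   # 37.0 -> "37% methylation"

# The other twin's figure from the text (5.5 of 10 sites on average):
print(100 * 5.5 / 10)                       # 55.0 -> "55% methylation"
```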

Compared to the SNP method, my project is less definitive. However, good protocols would still make the method definitive enough. Once you narrow the suspects down to two twins via normal DNA testing, you have two possible outcomes: a match between one twin and the sample at the crime scene, or inconclusive. At this point, you just need to differentiate between two people, not 7 billion. Thus, the required statistical power is much, much lower. The big difference between my method and the SNP method is the price. Whereas the SNP method costs between $100,000 and $160,000, my method could be done in-house for less than $5000. Furthermore, my method is performed using the same instruments as traditional DNA testing, meaning that the new instrumentation does not need to be validated for use in court.

So, while it will take some work, and my project is more of a proof-of-concept study, the use of DNA methylation in forensics is generating a lot of attention. One of the issues with methylation in my study, i.e., different patterns in different tissues, has been a major benefit to a different use of DNA methylation – tissue identification. The idea is that if you can identify methylation patterns consistent within a tissue type, you can use those patterns to identify the tissue. Another aspect relevant to my project, the change in methylation with age, has been vetted as a possible investigative tool. If you can identify levels of methylation that are consistent with different age groups, you can potentially “age” a suspect just by their DNA methylation. Studies on methylation aging are few and far between, but preliminary results are promising, suggesting that age-based methylation analysis can get within +/- 5 years of an individual’s actual age.

As we learn more about DNA methylation, it will become more useful. This is true not only for forensics, but also medicine, since methylation plays an important role in turning genes “on” or “off.” This is particularly true in cancer, where abnormal DNA methylation seems to occur. But, before we try to cure cancer with methylation, perhaps we can perform the smaller task of telling two twins apart from each other.

*Also published in part at http://forensicoutreach.com/library/when-dna-isnt-enough-methylation-forensics-and-twins-part-1/

and

http://forensicoutreach.com/library/when-dna-isnt-enough-methylation-forensics-and-twins-part-2/

An Evolutionary Explanation For Why You Wear Glasses

Empirically testing health-related hypotheses formulated through an evolutionary lens can prove difficult. Our environment and experience are radically different from those of the first 6 million years of human evolution. Living at the leading edge of human existence and the top end of the techno-scientific scale, we are far removed from the environment to which many of our genes are hypothesized to be suited. Fortunately, the human race is a diverse group of individuals who have dispersed across the globe and acclimated to a variety of circumstances. A few hunter-gatherer societies remain in parts of Africa. Though neither their genes nor their cultures are identical to those of the original hunter-gatherers, they retain the closest genetic and sociocultural similarity to our ancestors in the modern world. This is not to say that they are “less evolved” than other human societies – that notion is elementary and indicative of evolutionary ignorance. They are very well suited to their habitat, both genetically and culturally. Fortunately, those of us who are less suited to our environments (i.e., everyone else, particularly in the US) can glean incredible insights from them about the functioning of our own bodies and about the dietary and daily circumstances to which our physiology is best suited.

I recently wrote a primer on evolutionary medicine (which can be found here), which might be beneficial to read before getting into the specifics. This post will focus on myopia, or “near-sightedness,” the visual condition in which objects at a distance are out of focus. Myopia affects about 15% of Africans, a third of Americans and Europeans, and over 75% of Asians – a curious bias that I’ll address later in the article. Fortunately (sort of), myopia is easy to treat with glasses or contacts, and can even be cured to some extent with Laser-Assisted in situ Keratomileusis, commonly known as LASIK. Myopia occurs when the eye is too long, causing light to be focused in front of the retina and resulting in a blurry image. Corrective lenses therefore refract the light before it hits the cornea, essentially “overshooting” the refraction: myopic corrective lenses are thicker at the edges and thinner in the middle, causing the light to spread out slightly before it hits the cornea and ultimately moving the focal point further back in the eyeball. With LASIK, a high-frequency laser is used to vaporize tissue on the center of the cornea (no heat is involved; the vaporization is due to the light’s wavelength), reshaping the cornea so that light is correctly refracted.
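
As a rough, first-order illustration of what a corrective lens is doing (this is the standard thin-lens approximation, ignoring details like the distance between the lens and the eye, and is not taken from the post itself):

```python
# First-order optics for correcting myopia (illustrative numbers only).
# A myopic eye can only focus objects closer than its "far point." A
# diverging lens whose focal length equals that far point makes distant
# objects appear to come from the far point, so the eye can focus them.
# Lens power is measured in diopters: 1 / (focal length in meters).

def corrective_lens_power(far_point_m):
    """Approximate spectacle power in diopters for a given far point."""
    return -1.0 / far_point_m

print(corrective_lens_power(0.5))    # far point 0.5 m  -> -2.0 diopters
print(corrective_lens_power(0.25))   # far point 0.25 m -> -4.0 diopters
```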


In order to focus, the eye depends on ciliary muscles attached, via small fibers, to the lens. When focusing on something far away, as would often be the case outdoors, the ciliary muscles relax and the fibers pull the lens into a flattened shape. When focusing on something up close, such as a book, television, computer, or phone, the muscles contract, allowing the lens to become more rounded and convex. Think of a camera lens: to focus on something far away, you use a longer lens or zoom in, which moves the focal point of distant objects further back, allowing them to be in focus. To take up-close shots you use a macro lens, a very short, rounded lens that moves the focal point for near objects closer to the lens. This is how the eye works, and myopia is what happens when your zoom function is broken. Evolution, along with an analysis of our current sociocultural context, might be able to tell us why this happens.

[Figure: accommodation – how the lens changes shape to focus]

I’m a student, and I spend a lot of my time looking at a book, a laptop, or a phone. I love to get outside when I can, but, ultimately, most of my time is spent looking at things up close. That means the ciliary muscles in my eyes – the zoom muscles – spend most of their time holding the same near-focus position and are rarely worked through their full range. Just like any other muscle used in only one way, the ciliary muscles may weaken (as far as I’m aware, no quantitative studies have been performed on ciliary muscle size or mitochondrial count, probably because this would be difficult or impossible to do on a living person; perhaps future studies could examine the ciliary muscles of recently deceased individuals and compare those who suffered from myopia with those who had normal vision). Over time, particularly if this occurs throughout critical stages of development during childhood, the focusing apparatus may lose the ability to let the lens flatten properly, preventing you from focusing on distant objects. Of course, this raises the question of whether the muscles can be retrained. I don’t know, and I’m not sure that I am willing to find out by using myself as a guinea pig. Unfortunately, that makes me part of the problem of “dysevolution,” a term coined by the Harvard paleoanthropologist and human evolutionary biologist Daniel Lieberman. Dysevolution refers to the cycle of treating diseases without trying to change or fix their causes. Our technology and scientific understanding have advanced so rapidly in the past 100 years that we can fix things such as myopia with ease. Often this cycle is perpetuated by comfort: why change the way I do things when I can just buy contacts or glasses? My previous post mentions several other possible mismatch diseases, and Lieberman’s book, “The Story of the Human Body,” goes into detail about many of them. For many of them – if not most – we simply ignore the possible cures and instead opt for a more “comfortable” and easy treatment. This cycle is sure to grow and intensify as time goes on.

Evolutionary medicine is sometimes difficult to test empirically. However, as mentioned above, modern-day hunter-gatherer societies can offer incredible insight and points of comparison for how sociocultural differences may affect our “mismatch diseases.” Studies of this kind are unfortunately few and far between (possibly because research funding also focuses on treatments). However, studies of hunter-gatherer societies have shown that very few members suffer from myopia (or from many other non-infectious ailments, such as type-2 diabetes, heart disease, osteoporosis, and even cavities). The thought is that they are exposed to a variety of visual stimuli and their visual environment is constantly changing, which “exercises” their ciliary muscles and keeps them strong. Experiments have also shown that animals deprived of visual stimuli grow elongated eyeballs. Similarly, people who spend more time indoors, particularly studying, as is common in many Asian cultures, exhibit a much higher incidence of myopia, whereas those who spend more time outdoors, as is more common in many African cultures, tend to have a lower rate of myopia. Our eyes did not evolve to see things two feet from our face all day long. They evolved to keep us alive amid the plethora of visual stimuli in nature and to help us search for food – two things that many people, particularly children in developed countries, no longer need to do.

The solution isn’t to give up studying and electronics; it’s much simpler than that. Nearly everyone uses books and electronics, so why doesn’t everyone have myopia? One possibility is genetics, though that doesn’t seem like a plausible explanation: rates of myopia have only skyrocketed in the last century, and any latent mutation for poor vision would almost certainly have been selected against in our ancestors. The likely “cure” for myopia is balance. Spend time outside, especially as a child. The data from lab experiments as well as social statistics seem to point in this direction. If we continue to ignore the cause and only treat the symptoms, we trap ourselves in an ever-growing cycle in which we become more and more dependent upon technology.

Evolution: The Missing Link in Medicine

“Nothing in biology makes sense except in the light of evolution.”

– Theodosius Dobzhansky

Evolution is arguably one of the most widely supported and powerful theories in all of biology, and potentially in science as a whole. It has been a dominant explanation for well over 100 years. Once genetics entered the picture in the first part of the 20th century, Darwin’s common descent and Mendel’s inheritance were improved upon, greatly expanded, and solidified into the modern synthesis of evolution. Consistently verified through genetics, paleontology, geology, ecology, microbiology, and many other fields of science, evolution has become a pervasively potent field of study. It has created huge disciplinary offshoots – including evolutionary biology, evolutionary genetics, and evolutionary anthropology, to name a few – and has become the theoretical foundation for all of biology.

Some people today argue that humans are no longer under evolutionary pressures and, thus, are no longer evolving. Though this seems to make sense superficially, it is simply not true. The first issue is that humans only live about 80 years – a mere snapshot of our species’ existence. It is difficult to observe phenotypic differences resulting from biological evolution in only a few decades. That being said, scientists have found that some very recent biological changes have occurred, including the altered expression of the FTO gene. The FTO gene codes for a protein that regulates appetite. While it does not “make” a person obese (genes tend to predispose, not determine), it has been correlated with obesity. The catch? Its effect seems to have appeared only after about 1940, according to a study published just two days ago. The study (which can be found here) found that, for people born after 1942, the FTO gene showed a strong correlation with increased BMI. Why, though, would a gene that has not changed suddenly become active?

The Environment

What did change in the 1940s? Technology. WWII offered an incredible economic boost to the US that massively increased technological enterprise and was the main contributing factor in the world-superpower status the US achieved in the 40s. As technology increased, labor decreased; after all, the main purpose of technology is to make human life simpler. When human life becomes simpler, people become more sedentary. New technology also allowed for cheaper, higher-calorie, over-processed food. This one will take a while to work out: the difference could be epigenetic alteration, novel environmental stimuli, or even another gene interacting with FTO. While more testing will be needed to show exactly what happened in the early 40s that altered FTO expression, the fact that something did occur, likely stemming from environmental changes, still remains. Biological evolution doesn’t have to be a change in DNA sequence; that is far too simplistic. Anytime phenotype or genotype frequencies change at the population level, evolution is occurring. No population is in Hardy-Weinberg equilibrium, and no population ever will be. Humans will continue to evolve biologically. While cultural evolution has exceedingly outpaced biological evolution, giving the illusion that biological evolution has “stopped,” the truth is that culture can either augment or stagnate biological evolution, depending upon the situation. A cultural change toward drinking more milk may augment lactase persistence (and, in fact, it did), while a cultural propensity to live in climate-controlled housing year-round may slow other aspects of biological evolution. Nature doesn’t necessarily control natural selection; more broadly, the environment (cultural or natural) mediates evolution.
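
For readers who haven’t met Hardy-Weinberg equilibrium before, here is a quick illustration of what that claim means; the allele and genotype frequencies below are hypothetical, not data from the FTO study:

```python
# Hardy-Weinberg expectations for a two-allele locus (hypothetical numbers).
# Under HWE, genotype frequencies are p^2, 2pq, and q^2, and they stay the
# same generation after generation; systematic departures mean some
# evolutionary force (selection, drift, migration, non-random mating) is acting.

p = 0.7                      # frequency of allele A
q = 1 - p                    # frequency of allele a

expected = {"AA": round(p**2, 2), "Aa": round(2 * p * q, 2), "aa": round(q**2, 2)}
print(expected)              # {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}

# A hypothetical observed population, and its departure from expectation:
observed = {"AA": 0.55, "Aa": 0.38, "aa": 0.07}
departure = {g: round(observed[g] - expected[g], 2) for g in expected}
print(departure)             # {'AA': 0.06, 'Aa': -0.04, 'aa': -0.02}
```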

So, why is evolution important in medicine? Sure, doctors need to understand things like microbial evolution and how it plays a role in infectious diseases, but what about human evolution? How can a knowledge of human evolution impact medicine?

Cultural evolution has rapidly and drastically altered the human environment, thus changing how the human species evolves. More importantly, our environments have changed so aggressively that our bodies cannot keep up. (Before I go on, I have to make something clear. I am not a proponent of the Paleo Diet; if you’d like to know why, check out this post.) This means our bodies are often best adapted to the environments of the past (though these vary drastically). This has given rise to what are sometimes referred to as “mismatch diseases.” The list is extensive, but includes maladies such as atherosclerosis, heart disease, type-2 diabetes, osteoporosis, cavities, asthma, certain cancers, flat feet, depression, fatty liver syndrome, plantar fasciitis, and irritable bowel syndrome, to name a few. Some of these may not be actual mismatch diseases, but many of them likely are. Furthermore, many of these illnesses feed off one another, creating a terrible feedback loop. 100 years ago you’d likely die from an infectious disease; today, most people in developed nations will die from heart disease, type-2 diabetes, or cancer.

These diseases don’t have to be essential baggage of modernity. Anthropologists (and some intrepid human evolutionary biologists) study modern-day hunter-gatherer societies in order to glean information about the nature of our species before the Neolithic Revolution. It’s important to note that these are not perfect models (cultural and biological evolution has still occurred in these hunter-gatherer societies), but they are the best available. Interestingly, modern-day hunter-gatherers don’t suffer from many of these mismatch diseases. (This can’t be explained by longevity; hunter-gatherers regularly live into their late 60s and 70s, and though unusual to many of us, their lives aren’t as brutish as they are often portrayed.) Diseases such as type-2 diabetes, hypertension, heart disease, osteoporosis, breast cancer, and liver disease are rare in these societies. What’s more, myopia (near-sightedness), asthma, cavities, lower back pain, crowded teeth, collapsed arches, plantar fasciitis, and many other modern ailments are exceedingly rare. So what’s different? The easy answer is their diet, lifestyle, and environment. The difficult answer involves elucidating the physiological importance of certain social norms and the biochemical consequences of differing diets. Some very exciting work is beginning to arise in this field, dubbed “evolutionary medicine.”

Modern medicine and medical research focuses largely on treating problems, i.e., drugs and procedures that alleviate symptoms after the disease has manifested. While the cause is noble, and indeed necessary, it’s not enough. The childish logic of medical research creates a cycle of sickness-treatment that, in 2012, totaled almost $3 trillion in healthcare costs. Furthermore, the sedentary and Epicurean lifestyle in which many Americans live willingly feeds this cycle; among the less privileged, necessity feeds this cycle through the inability to afford healthy food, limited access to health education, and a sociocultural feedback loop that breeds its own vicious cycle.

There will likely never be a drug that can cure cancer (of which there are thousands of variants that can differ even between individuals with the “same” cancer), heart disease, type-2 diabetes, or many of the other previously mentioned noninfectious diseases. The rationale is akin to putting water in your car’s gas tank and hoping additives will make it work as efficiently as gasoline: the car was built to run on gasoline. Similarly, your body did not evolve to eat excessive amounts of salt, carbs, and sugars (whose different types, particularly glucose and fructose, do not have the same biochemical effects during digestion), to sit for extended periods of time, to wear shoes (particularly those with arch support; a common misconception is that arch support is good for you when, in fact, it weakens the muscles of the arch, leading to ailments such as collapsed arches and plantar fasciitis), to read for several hours at a time, to chew overly processed food, or to do many of the other things that people in developed nations commonly do and often see as luxuries.

Modern medicine needs a paradigm shift. Funding needs to support not only treatments but also investigations into prevention. The medical cause of diabetes may be insulin resistance, but what causes insulin resistance, and how can we prevent it? Sugar may cause cavities, but what can we do to prevent this? Shoes, even comfy ones, may cause collapsed arches, but how do we prevent this? The immediate response may be that this sort of prevention cannot be attained without abandoning modern technology altogether. However, this isn’t the case, and it’s not the argument I’m trying to make. Research should focus on a broad range of interacting variables, including diet, work environment, school environment, and other aspects of evolutionarily novel environments. Only after research from this evolutionary perspective takes place can constructive conversations and beneficial environmental changes occur. We don’t have to abandon modern society to be healthy; we just need to better understand how our lifestyle affects our bodies. Items such as cigarettes and alcohol are already age-limited and touted as dangerous to health. Is junk food, particularly soda, any different? We don’t put age regulations on cigarettes or alcohol to protect bystanders; these regulations protect children who cannot be relied upon to make proper choices in their naivety. Should soda be under the same constraints?

If medicine and medical research do not undergo this paradigmatic shift and incorporate an evolutionary perspective, the outcome does not bode well for us. Medical costs will continue to rise, with little room for improvement and greater opportunity for socioeconomic factors to determine the quality of healthcare available. This ad hoc, treatment-first approach to medicine is not sustainable, and it is not the best we can do.

Multiplex Automated Genome Engineering: Changing the world with MAGE

Humans have evolved a unique mastery of toolmaking through advanced technology. As an extension of our biological bodies, technology has loosened the grip of natural selection. This is particularly true in biomedicine and genetic engineering: we have the ability to directly alter the blueprint of life for any purpose we wish. Beginning in the 1970s with the creation of recombinant DNA and transgenic organisms, genetic engineering has offered scientists the ability to study genes on a level that may not have seemed possible at the time. The field has provided a wealth of knowledge as well as practical applications, such as knockout mice and the ability to produce near-endless amounts of human insulin for diabetics.

As of 2009, multiplex automated genome engineering (MAGE) has ushered in a new branch of genetic engineering – genomic engineering. We are no longer restricted to altering single genes, but rather are able to alter entire genomes by manipulating several genes in parallel. This new ability, brought about by MAGE technology, allows for nearly endless applications that stretch well beyond medicine or industry; agriculture, evolutionary biology, and conservation biology will benefit tremendously as MAGE technology progresses. Genetic engineering advancements such as MAGE are poised to revolutionize entire fields of science, including synthetic biology, molecular biology, and genetics by offering faster, cheaper, and more powerful methods of genome engineering.

Homologous Recombination

Genetic engineering underwent a revolutionary change in the 1980s, largely due to the pioneering work of Martin Evans, Mario Capecchi, and Oliver Smithies. Evans and Kaufman were the first to describe a method for extracting, isolating, and culturing mouse embryonic stem cells. This laid the foundation for gene targeting, a method discovered independently by Oliver Smithies and Mario Capecchi. Capecchi and his colleagues were the first to suggest that mammalian cells had the machinery necessary for homologous recombination with exogenous DNA. Smithies took this a step further, demonstrating targeted gene insertion using the β-globin gene. Ultimately, the combined work of Evans, Smithies, and Capecchi on homologous recombination earned them the Nobel Prize in Physiology or Medicine in 2007. The science of homologous recombination has enabled many scientific discoveries, primarily through the creation of knockout mice.

Homologous recombination works under many of the same principles as chromosomal recombination in meiosis, wherein homologous genetic sequences are exchanged. The difference lies in the fact that the engineered version works with exogenous DNA and at the gene level rather than the chromosomal level.


The method works by using a double-stranded genetic construct with flanking regions that are homologous to the flanking regions of the gene of interest. This allows the sequence in the middle, containing a positive selection marker and the new gene, to be incorporated. The positive selection marker should be something that can be selected for, such as resistance to a toxin or a color change. Outside one of the flanking regions of the construct lies a negative selection marker; the thymidine kinase gene is commonly used. If recombination is too lenient and the thymidine kinase gene is incorporated into the endogenous DNA, the cell can be detected and disposed of. This prevents too much genetic information from being exchanged.

Using this method, knockout mice can be created. A knockout mouse is a mouse lacking a functional copy of a gene, which allows the gene’s function to be elucidated. Embryonic stem cells are extracted from a mouse blastocyst and introduced to the gene construct via electroporation. The successfully modified stem cells are selected using the positive and negative markers, then isolated and cultured before being inserted back into mouse blastocysts. The blastocysts can then be implanted into female mice, producing chimeric offspring. These offspring are mated to wild-type mice; if the germ cells of a chimeric mouse were derived from the modified stem cells, then some offspring will be heterozygous, carrying one modified and one wild-type copy of the gene. These heterozygous mice can then be interbred, with a portion of the offspring being homozygous for the modified gene – the beginning of a mouse line with the chosen gene “knocked out.”
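
As a quick, illustrative aside (a toy Punnett-square calculation, not part of any published protocol), the final heterozygote-by-heterozygote cross yields homozygous knockouts about a quarter of the time:

```python
# Toy Punnett-square arithmetic for crossing two heterozygous mice.
# "+" = functional (wild-type) allele, "-" = knocked-out allele.
from collections import Counter
from itertools import product

parent_1 = ["+", "-"]   # heterozygous
parent_2 = ["+", "-"]   # heterozygous

offspring = Counter("".join(sorted(pair)) for pair in product(parent_1, parent_2))
total = sum(offspring.values())
for genotype, count in sorted(offspring.items()):
    print(f"{genotype}: {count}/{total}")
# ++: 1/4 (wild-type), +-: 2/4 (heterozygous), --: 1/4 (homozygous knockout)
```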


Multiplex Automated Genome Engineering Process

The major drawback of the previously described method of “gene targeting” is the inability to multiplex. The process is not very efficient, and targeting more than one gene becomes problematic, limiting homologous recombination to single genes. In 2009, George Church and colleagues solved this issue with the creation of multiplex automated genome engineering (MAGE). MAGE technology uses hybridizing oligonucleotides to alter multiple genes in parallel. The machine may be thought of as an “evolution machine,” wherein favorable sequences are chosen at a higher frequency than less favorable sequences. The hybridization free energy is a predictor of allelic replacement efficiency. As cycles complete, sequences become more similar to the oligonucleotide sequence, increasing the chance that those sequences will be further altered by hybridization. Eventually, the majority of endogenous sequences will be completely replaced with the sequence of the oligonucleotide. This process only takes about 6-8 cycles.
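
To get a feel for why only a handful of cycles is needed, here is a toy calculation; the per-cycle replacement efficiency is a made-up figure for illustration, not a number from the MAGE papers:

```python
# Toy model of how allelic replacement accumulates over MAGE cycles.
# Assumption (illustrative only): each cycle converts a fixed fraction of the
# cells that still carry the original sequence.

PER_CYCLE_EFFICIENCY = 0.25   # hypothetical 25% replacement per cycle

def fraction_modified(cycles, efficiency=PER_CYCLE_EFFICIENCY):
    """Fraction of the population carrying the new allele after n cycles."""
    return 1 - (1 - efficiency) ** cycles

for n in (1, 4, 8):
    print(f"after {n} cycle(s): {fraction_modified(n):.0%}")
# after 1 cycle(s): 25%, after 4 cycle(s): 68%, after 8 cycle(s): 90%
```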


After the E. coli cells are grown to mid-log phase, expression of the beta protein is induced. The cells are chilled and the media is drained. A solution containing the oligonucleotides is added, followed by electroporation. This step is particularly harsh, killing many of the cells. The surviving cells can be selected based on positive markers (optional, but it increases efficiency) and allowed to reach mid-log phase again before the process is repeated. Church and his colleagues optimized the E. coli strain EcNR2 to work with MAGE. EcNR2 carries the λ phage genes exo, beta, and gam and is deficient in mismatch repair. When expressed, the phage genes help keep the oligonucleotide annealed to the lagging strand of the DNA during replication, while the mismatch repair deficiency prevents the cell’s repair machinery from reverting the oligonucleotide sequence once it is annealed. Using an improved technique called co-selection MAGE (CoS-MAGE), Church and colleagues created EcHW47, the successor to EcNR2. In CoS-MAGE, cells that exhibit naturally superior oligo uptake are selected before the genes of interest are targeted.

MAGE technology is currently in the process of being refined, but shows incredible promise in practical applications. Some of the immediate applications include the ability to more easily and directly study molecular evolution and the creation of more efficient bacterial production of industrial chemicals and biologically relevant hormones. Once the technique has been optimized in plants and mammals, immediate applications could be realized in GMO production and creation of multi-knockout mice that will give scientists the ability to study gene-gene interactions on a level previously unattainable. A more optimistic and perhaps grandiose vision could see MAGE working towards ending genetic disorders (CRISPR technology, an equally incredible genomic editing technique, may beat MAGE there) and serving as a cornerstone technique in de-extinction. The ability to alter a genome in any fashion brings with it immense power. The possibilities for MAGE are boundless, unimaginable, and are sure to change genomic science.

For more information on homologous recombination, see:

http://www.bio.davidson.edu/genomics/method/homolrecomb.html

For more information on MAGE, see:

Wang, H. H., Isaacs, F. J., Carr, P. A., Sun, Z. Z., Xu, G., Forest, C. R., & Church, G. M. (2009). Programming cells by multiplex genome engineering and accelerated evolution. Nature, 460(7257), 894-898.

Wang, H. H., Kim, H., Cong, L., Jeong, J., Bang, D., & Church, G. M. (2012). Genome-scale promoter engineering by coselection MAGE. Nature Methods, 9(6), 591-593.

For more information on CRISPR (which I highly recommend; it’s fascinating), see:

https://www.addgene.org/CRISPR/guide/

Ebola – Not The Threat To the US That You Think It Is

About 900 people have died of Ebola in the last 6 months. Should you be worried? If you don’t want to read the post, and are just looking for an answer: NO!

If you’re one of the people who is saying, “But they are bringing 2 Ebola patients to the U.S., and a man in New York is suspected of having the disease!!” then please, for the sake of everyone around you, keep reading.

Let’s take a look at Ebola first. What exactly is it? Ebola virus disease in humans is caused by one of three species in the family Filoviridae: Zaire, Sudan, and Bundibugyo. There are two other Ebola species, but they rarely, if ever, cause disease in humans. Of the three species mentioned, Ebola Zaire is the nastiest, with anywhere from a 50-90% fatality rate – closer to 50% with supportive care and closer to 90% without it. Ebola is a hemorrhagic fever disease, characterized by high fever, shock, multiple organ failure, and subcutaneous bleeding. Typically, the patient first shows flu-like symptoms before progressing to the more characteristic bleeding. If the virus itself doesn’t kill you, oftentimes your own immune system will spiral out of control and send you into shock and, most often, death.

Now that the bad part is over, let’s look at why it’s not as scary globally as you might think.

Strengths:

  • High mortality rate
  • Ambiguous early symptoms
  • 3-21 day incubation period
  • No treatment or cure

Weaknesses:

  • Only spread through body fluids
  • Lacks a ubiquitous vector
  • Physically impairs the victim

Although the number of strengths outweighs the number of weaknesses, the quality of the weaknesses far outweighs that of the strengths. Without being airborne or transmitted by some ubiquitous vector, it is unlikely that any disease will ever cause a pandemic (meaning, global effects). In addition, Ebola impairs its victims: even the flu-like symptoms are enough to keep you from much human contact. The scariest part about Ebola is the incubation period – someone may not show symptoms for up to three weeks after being exposed to the virus. While this, in concert with the ambiguous early symptoms, might keep the flame flickering, it isn’t enough to start a wildfire. Still not convinced? Let’s put the outbreak into perspective:

We are currently experiencing the largest and most deadly Ebola outbreak in recorded history. The death toll is almost 900 in six months – less than the number of people who die every six months from hippopotamus attacks. The Spanish flu of 1918, undoubtedly the worst pandemic in the history of mankind, infected about 30% of the people in the world and killed anywhere from 3-5% of the global population in a single year. If you see a grave marker with a death date in 1918, the chances are greater than not that the individual died of Spanish flu. This astounding death toll was accomplished WITHOUT the advent of modern travel, i.e., no airplanes. The current Ebola outbreak has killed 900 people, or roughly 0.00001% of the world population – about one in eight million of the roughly 7.3 billion people worldwide. Oh yeah, the other thing? Ebola isn’t worldwide. It’s in West Africa.
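
Here is the arithmetic behind those proportions, for anyone who wants to check it (the 1918 world-population figure is a commonly cited estimate, not from the post):

```python
# Rough arithmetic behind the comparison above.
ebola_deaths = 900
world_pop_2014 = 7.3e9
print(f"Ebola deaths as a share of the world: {ebola_deaths / world_pop_2014:.7%}")
# -> about 0.0000123%, i.e. roughly 1 in 8 million people

world_pop_1918 = 1.8e9        # commonly cited estimate
spanish_flu_rate = 0.03       # low end of the 3-5% figure quoted above
print(f"Spanish flu deaths (low end): {world_pop_1918 * spanish_flu_rate:,.0f}")
# -> about 54,000,000 in roughly a year
```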


The only time Ebola has ever really been outside of Africa is… well… never. The closest we’ve come is recently bringing two patients to the US for treatment – two patients who will likely not even be exposed to US air or land for the next two weeks, as they were flown in on a plane with a quarantine chamber and are now isolated in a ward at one of the top hospitals in the US.

I’m not trying to downplay the seriousness of Ebola from the safety of my suburban coffee shop. Yes, it would be scary if I were living in Sierra Leone – not so much because I’d have a high chance of contracting Ebola, but because I wouldn’t know where it might be lurking. And if I did contract it, I’d be more miserable and frightened over the next two weeks than I’d ever been in my life, relieved of that misery only by multiple organ failure and bleeding from my eyes until I died, or by the less likely outcome that I survived. Ebola is a terrible, nasty disease, but it’s not a global threat, nor is it a U.S. threat.

Biocentrism – An Alternative “Theory of Everything.”

For a long time, physicists have dreamt of a unifying “Theory of Everything” that would amalgamate every physical aspect of the universe into one packaged theorem. As of now, physics hangs in the balance between Einstein’s Theory of General Relativity (GR), which does a pretty great job of explaining relationships between macrocosmic entities, and Quantum Field Theory (QFT), which does an excellent job of showing that GR is wrong on the microcosmic scale, but we aren’t sure why. Both have tremendous explanatory power (though nobody really knows what QFT is actually saying), but, unfortunately, are incompatible cosmologies. Subatomic particles, explained by QFT, simply don’t fit the laws of GR. God may not “play with dice,” as Einstein put it, but apparently he does roll subatomic dice. Truly, QFT embodies Aristotle’s maxim, “the more you know, the more you know you don’t know.” More recently, physics has also devised String Theory, of which various versions can be incorporated into a multi-dimensioned theory known as “M-Theory.” M-Theory also has incredible explanatory power, accounting for all of the fundamental forces and types of matter. It’s a great hypothetical framework, but lacks a practical aspect that is necessary in any strong scientific theory, making it about as believable as any other cosmological mythology. While these “Big 3” theories are all contained within the realm of physics, Robert Lanza claims there is a 4th, more appropriate explanation. And it lies within the realm of biology.

Robert Lanza is a leading stem cell researcher and Chief Scientific Officer at Advanced Cell Technology. He is one of the world’s most respected biologists, having been mentored by giants across scientific disciplines, including Jonas Salk (developer of the first polio vaccine), B.F. Skinner (the famous psychologist and behaviorist), and Christiaan Barnard (who performed the first human heart transplant). In other words, Lanza has a lot to lose, and likely wouldn’t risk his reputation on something he didn’t deem worthwhile.

In his book, “Biocentrism,” Lanza offers a cosmology situated within the field of biology, specifically within consciousness. Regarding Biocentrism, Lanza notes 7 principles. I will list them all and then take a closer look at each one:

  • What we perceive as reality is a process that involves our consciousness. An “external” reality, if it existed, would – by definition – have to exist in space. But this is meaningless because space and time are also not absolute realities but rather tools of the human and animal mind.
  • Our external and internal perceptions are inextricably intertwined. They are different sides of the same coin and cannot be divorced from one another.
  • The behavior of subatomic particles – indeed all particles and objects – is inextricably linked to the presence of an observer. Without the presence of a conscious observer, they at best exist in an undetermined state of probability waves.
  • Without consciousness, “matter” dwells in an undetermined state of probability. Any universe that could have preceded consciousness only existed in a probability state.
  • The structure of the universe is explainable only through biocentrism. The universe is fine-tuned for life, which makes perfect sense as life creates the universe, not the other way around. The “universe” is simply the complete spatio-temporal logic of the self.
  • Time does not have any real existence outside of the animal-sense perception. It is the process by which we perceive changes in the universe.
  • Space, like time, is not an object or a thing. Space is another form of our animal understanding and does not have an independent reality. We carry space and time around with us like turtles with shells. Thus, there is no absolute self-existing matrix in which physical events occur independent of life.

That’s a lot to sort out. Let’s start with the first principle:

  • What we perceive as reality is a process that involves our consciousness. An “external” reality, if it existed, would – by definition – have to exist in space. But this is meaningless because space and time are also not absolute realities but rather tools of the human and animal mind.

The second half of this tenet is incorporated into the 6th and 7th, so I will just take a look at the first half. Yes, what we perceive as reality is a process that involves our consciousness, regardless of which sense does the perceiving. And, yes, an external reality would need to exist in some sort of space, as it is external to our own perceptive machine, i.e., the brain. Our senses indeed mean nothing without consciousness. Similarly, you can perceive things that are not there, or experience one sense as another – a sound as a color, say. For more information on this, look up synesthesia.

Now for the second:

  • Our external and internal perceptions are inextricably intertwined. They are different sides of the same coin and cannot be divorced from one another.

Again, I don’t see a problem with this statement. What you “see,” “touch,” “smell,” “taste,” or “hear” are all meaningless without interpretation from the brain, or consciousness. If I think about the number 4, the conscious process is not so different from when I see the number 4 and my brain interprets the meaning of the symbol.

So far, so good. What about the third?

  • The behavior of subatomic particles – indeed all particles and objects – is inextricably linked to the presence of an observer. Without the presence of a conscious observer, they at best exist in an undetermined state of probability waves.

This one assumes that the physical instantiation of the whole is the sum of its parts. On a basic level, that makes as much sense as anything else: if subatomic particles exist as waves when unobserved (see the double-slit experiment, sketched below, for details), then so should the things they compose. There are some points of contention with this generalization. For example, sodium (Na) and chlorine (Cl) are both pretty dangerous to humans in elemental form, yet when they come together they form sodium chloride – table salt, the seasoning that makes those french fries so delicious. Perhaps subatomic particles have some as-yet-undetermined attribute that changes how they behave once combined. That, however, seems unlikely. Then again, it’s quantum mechanics; everything in quantum mechanics seems unlikely. Ultimately, this principle passes on the grounds of simple logic, but it could prove troublesome due to misunderstood properties of subatomic particles.
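
To make the “probability wave” idea concrete, here is a minimal numerical sketch of the double-slit setup – my own illustration, not anything from Lanza’s book. The slit separation, wavelength, and angles are arbitrary example values; the point is that adding the two amplitudes before squaring (nothing “watching” the slits) produces interference fringes, while adding the two intensities (the which-slit-observed case) does not:

```python
import numpy as np

# Far-field double-slit sketch: two point slits a distance d apart emit
# waves of wavelength lam; at angle theta the path difference d*sin(theta)
# shows up as a relative phase between the two amplitudes.
lam = 500e-9                          # wavelength (m), roughly green light
d = 5e-6                              # slit separation (m), illustrative
thetas = np.linspace(-0.2, 0.2, 9)    # a few detection angles (radians)

for theta in thetas:
    phase = 2 * np.pi * d * np.sin(theta) / lam
    amp1 = 1.0                        # amplitude from slit 1 (reference)
    amp2 = np.exp(1j * phase)         # amplitude from slit 2, phase-shifted
    unobserved = abs(amp1 + amp2) ** 2            # add amplitudes, then square
    observed = abs(amp1) ** 2 + abs(amp2) ** 2    # "which slit?" known: add intensities
    print(f"theta = {theta:+.3f}  fringes = {unobserved:4.2f}  no fringes = {observed:4.2f}")
```

The “fringes” column swings between 0 and 4 as the phase varies, while the “no fringes” column sits flat at 2 – a numerical version of interference vanishing once you know which slit the particle went through.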

The fourth is linked to the 3rd:

  • Without consciousness, “matter” dwells in an undetermined state of probability. Any universe that could have preceded consciousness only existed in a probability state.

This principle follows logically from the third. Namely, matter, composed of subatomic particles, seems to exist as a wave until observed (i.e., perceived consciously). Thus, a pre-conscious universe would exist as a wave, suggesting it exists only as a probability.

The 5th is perhaps the shakiest principle:

  • The structure of the universe is explainable only through biocentrism. The universe is fine-tuned for life, which makes perfect sense as life creates the universe, not the other way around. The “universe” is simply the complete spatio-temporal logic of the self.

Lanza is jumping the gun a little here. Yes, it is possible that life (or consciousness) “creates” the universe, since the subatomic particles that compose it might exist only as waves in the absence of a conscious observer. As such, how we make sense of the things we perceive using space and time – the spatio-temporal logic of the self – is essentially the “universe.” The claim is bold, but not completely out in left field.

The 6th explains part of the 5th:

  • Time does not have any real existence outside of the animal-sense perception. It is the process by which we perceive changes in the universe.

For those unacquainted with physics or neuroscience, this seems radical. In fact, it seems a little radical even for those who work in these fields. However, it seems to be true. Time is a tool. Our brains are wired for connecting the dots, and in order to connect the dots we need a connector; that connector is time. Think of time as not so different from measuring length, weight, or any other attribute. If you can’t imagine how time might not exist, try imagining a world in which we cannot measure length. Lengths would still exist, but we would need a ruler to compare them. Events likewise exist, and we use time to compare them. If this still does not make sense, do some independent research on the topic. It’s difficult to explain, and I’m certainly not the most qualified individual to do the explaining. However, it doesn’t really defy anything, so this principle can be accepted as well.

And now to wrap things up with the 7th principle:

  • Space, like time, is not an object or a thing. Space is another form of our animal understanding and does not have an independent reality. We carry space and time around with us like turtles with shells. Thus, there is no absolute self-existing matrix in which physical events occur independent of life.

Again, space is how we compare our perceptions. It’s another “measuring stick” for what we perceive, much like time. In fact, quantum-entangled particles seem to defy space and time: their correlations show up instantaneously, no matter how far apart the particles are, with no light-speed signal passing between them. And the faster you move, the more lengths contract and the more slowly your clock ticks relative to a stationary observer (the sketch below puts rough numbers on this). So space and time both change depending on the circumstance. This, taken together with the idea that our consciousness is what interprets, and thus “creates,” the universe, is Lanza’s main argument against a self-existing physical universe. Again, this is difficult to comprehend, and I’m sure I muddy the picture more than others would. But look into it and you might understand it more clearly.
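
To put rough numbers on that “faster you move” claim, here is a small sketch of the standard special-relativity Lorentz factor; the speeds are arbitrary examples of my own, not anything from Lanza:

```python
import math

def lorentz_factor(v_over_c):
    """gamma = 1 / sqrt(1 - (v/c)^2): how much lengths contract and time dilates."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99):
    g = lorentz_factor(v)
    print(f"v = {v:.2f}c  gamma = {g:5.2f}  "
          f"(a 1 m rod looks {1 / g:.2f} m long; 1 s on the moving clock spans {g:.2f} s here)")
```

At everyday speeds gamma is essentially 1, which is why none of this is obvious from ordinary experience; it only becomes dramatic near the speed of light.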

So, those are the 7 principles of Biocentrism. On the surface, they seem to make sense – at least as much sense as a physically oriented view of the universe. They’re just incredibly strange and require a complete paradigm shift to comprehend. The thing that stuck out to me most while reading Lanza’s book and claims of Biocentrism was the inception of the universe. According to Lanza, and perhaps other proponents of the Participatory Anthropic Principle (PAP), the universe existed in an indeterminate state before consciousness; once consciousness arrived, the universe could be observed and thus materialize. But wouldn’t that consciousness, embodied in a conscious being, itself be only a probability until observed? It seems paradoxical.

Now, I don’t necessarily subscribe to biocentrism. I do think it explains a lot in a very interesting fashion, but it lacks practical testability, and thus falsifiability, in the same way M-Theory does. Perhaps, as we begin to understand QFT better (or understand it at all), we will be able to manipulate and experiment upon subatomic particles more precisely, providing evidence either for or against M-Theory and Biocentrism. Until then, both cosmologies are too theoretical to stand as a “Theory of Everything.” At any rate, I would definitely recommend the book. It’s an excellent, thought-provoking read that will challenge the way you see the world, though you have to approach it with an open mind. A background in science wouldn’t hurt, but Lanza does a pretty good job of explaining the concepts. I’ll end with another quote attributed to Aristotle:

“It is the mark of an educated mind to be able to entertain a thought without accepting it.”

Keep that in mind if you read the book.