Why Cultural Appropriation Matters

Cultural appropriation is a tricky topic to unpack: it must be explained in a manner that keeps the attention of those who dismiss it as “PC crap,” yet doesn’t downplay the significance of the issue. But we should try anyway.

I’ve no doubt played a role in cultural appropriation throughout my life, with no bad intentions or awareness that I was doing anything harmful. Growing up in okla humma, Choctaw for “Red People,” I was surrounded by Native American culture. Half of the city names I know in Oklahoma derive from a word or phrase in the language of one of the 67 tribes represented in the state. You can buy dream catchers and arrowheads at gas stations along the interstate, and Oklahoma museums have some of the largest Native American collections in the world. The designation of Oklahoma as Indian Territory in the 19th century laid the foundation for the incredibly complex and muddled mixing of unique cultures that white people typically lump into “Native American” culture. This amalgamated meta-culture, if you will, has been commodified into a staple of Oklahoma tourist attractions and local affairs. To those born here, the combined Native American culture is a frequent part of everyday life, even though many don’t understand the significance of the cultural artifacts in their original context.

Cultural appropriation is often so easily recognized and understood by the trained eye (e.g., the anthropologist) and the appropriated (e.g., Native Americans), yet so unrecognizable to the appropriator (often white Americans). Perhaps white Americans have trouble recognizing cultural appropriation because many of them do not have deeply rooted cultural expressions that hold sociohistorical significance. I am not saying whites don’t have a culture; there is a rich variety of white culture. However, white Americans don’t typically have a long history of ethnically derived cultural expressions that hold significance in their lives and the lives of those around them. It is possible that this lack of connection with a uniquely defined, deeply rooted culture makes it more difficult for white Americans to recognize when something is appropriated. Perhaps an analog to which many white Americans, particularly in the South, can relate would be a misappropriation of religious traditions or symbols. If a member of a decidedly un-Christian group wore symbols of Christian religious significance while ignoring the Christian religion and the context in which those symbols were significant, the Christian community would likely be upset. This might be particularly true if the symbols were paraded in a fashion that almost mocks or perverts their significance and meaning. In this hypothetical situation, I imagine the argument would be that those individuals do not represent Christianity, and are giving others a diluted and unrepresentative image of what Christians are. Much the same is true when a white American frivolously wears a Native American headdress.

It’s tough to even find an apt metaphor that accurately represents a white American experiencing cultural appropriation. In the US, white culture is the dominant culture, and thus cannot experience systemic oppression of its culture. This isn’t to say white Americans are intentionally oppressive or insensitive to other cultures. Often the material culture being appropriated is made out to be a fashion statement or décor due to its intriguing and desirable properties. There’s nothing inherently wrong with purchasing a dream catcher or some arrow earrings for fashion. The problem occurs when those cultural expressions are used without any understanding of their significance within their original context. Cultural appropriation is only a “thing” because discrimination still exists, albeit to a much lesser degree than in the past (overtly, at least). The issue now is not necessarily that individuals are oppressive, per se, but that institutional discrimination exists. When a company mass produces an appropriated item, it quickly loses its significance. Crosses, for example, have become such a trend that they hold much less cultural significance today than they originally did. Even so, Christianity is the predominant religion in the US, and cannot be robbed of its symbols quite the way a minority culture can. It’s much easier for your voice to be heard and the significance of your culture’s symbol to be realized when the majority of people living around you understand that significance in its original context.

The biggest issue with cultural appropriation is that the significance of an appropriated expression, a unique human perspective on the world, is lost, diluted to a fashion statement that will die out within the decade. Human cultural diversity is rapidly declining as the world becomes more and more globalized. Globalization isn’t inherently bad, so long as idiosyncratic views on human nature and the world are not lost along the way as minority groups attempt to attain equality through assimilation. Cultural appropriation not only thins the world’s cultural diversity, but also reinforces stereotypes of minority cultures and perpetuates ignorance, which further erodes any cultural pride that group of people may have. Once your culture is universally mocked and turned into a casual commodity by the dominant culture, why bother preserving it?

If we want to preserve the wonderful array of cultural diversity that we currently have, we must be more aware of how we borrow practices and symbols from minority cultures. There is nothing inherently wrong with buying dream catcher earrings as a fashion statement; however, do your part in preserving the culture by learning a little something about the item you’re wearing. Cultural exchange can be a powerful tool in the preservation of marginalized cultures, but only if the significance in the original cultural context is preserved along with the material culture. Simply make sure you aren’t perverting something that is sacred to another group of people, and give credit where credit is due. If you’re using something that is interesting or unique, see what the story is behind it. Learning what something truly means, and telling the story alongside the material, will likely make it even more interesting while also helping preserve its cultural significance.

Why Can’t Rachel Dolezal be Black?

The news of Rachel Dolezal as someone who has “pretended” to be black came to light at an interesting time. A few weeks ago, former decathlon gold medalist Bruce Jenner came out as a trans woman, henceforth identifying as Caitlyn Jenner. It is broadly accepted among academics that gender and sex are not the same thing; sex is a biological reality, and gender is a social role that someone fills in society. While biological sex tends to be binary (male/female, with exceptions such as intersex conditions), gender can be seen as more of a spectrum. So how does this relate to race, or does it?

As a preface, I am not suggesting that gender is culturally equivalent to race, though both are cultural constructs and neither is a biological reality. Race is a manner in which people are classified by phenotypic characteristics (often skin color), while gender, though often defined by phenotypic characteristics, describes a role in society. Race does not define a societal role. That being said, there are similarities between race and gender insofar as both relate to social identity and both can be seen as a spectrum.

I’ve read a number of articles and comments, from liberal and conservative posts alike, suggesting that one cannot be “transracial.” But, if race is a cultural construct typically defined by skin color (though other features, such as facial structure and hair are often included), then why can’t someone “be black” if they fit these descriptions? It seems to me that any notion otherwise would suggest that race is tied to some other biological or sociohistorical reality. If you’re thinking, “but isn’t race tied to ethnicity?” then try again.

Race and ethnicity are not equivalent. Ethnicity is the classification of people through such things as shared cultural patterns, histories, and language. As with race, there is no biological reality to ethnicity. “Ethnic groups” tend to migrate and breed together over time, and thus ethnic ancestry can be estimated genetically using very small changes in non-functioning portions of the genome. However, this says nothing of the biological nature of ethnicity. I may measure a table by its color, but the color adds nothing to the nature of the table. It’s simply a variation. Neither race nor ethnicity is tied to biology, nor are they necessarily related to one another. A racially “black person” in the US may be from any country or ethnicity. All that matters is that they “look black.”

Race, although not a biological reality, has very real consequences in society. Social constructs can be equally as real and have just as much of an effect on life as any biological reality. I could list statistics showing how “being black” tends to put one at a social and economic disadvantage, but those stats are easy to find and have been reiterated so many times that posting them here would not add any credibility to what I am saying. Rachel did not grow up “black,” and was not subject to the same realities as those born into African American families. However, she has been living her adult life, day and night, as a “black woman” who is subject to those realities based on appearance.

Some have called what Rachel Dolezal did “cultural appropriation,” referring to the adoption of certain cultural practices or symbols (typically a majority culture adopting from a minority one) outside of their cultural context. Cultural appropriation is often performed in a manner that downplays the significance of the practice or symbol in its original sociocultural context, making it a sort of popular commodity among the (often more socially privileged) majority.

Nobody can be sure of Rachel’s motives except Rachel. I have no idea how she feels about her role in society, nor do I understand her perceived identity. However, I do not think she has appropriated African American culture. Cultural appropriation typically involves the adoption of practices or symbols with no intention of fully immersing oneself in the context, or even attempting to understand it. Regardless of Rachel’s intentions, it doesn’t seem she did this. In fact, she did the exact opposite, to an extreme. The details of her life are just beginning to surface, and there will no doubt be several conflicting versions. But one thing seems very clear: Rachel did not want anyone to think she was white. She lived as a black individual for a long time, fully immersing herself in African American culture. From the information out as of now, she didn’t “switch” between races. She identified in society as black, day and night. If the rest of society saw her as black, then she would have been treated, physically, socially, and mentally, as a black individual, no differently than any other black individual.

Nearly everyone who knows Caitlyn Jenner as a woman also knows she was, at one point, a man. In the case of Rachel, it seems that the people around her did not know she was born “white.” Deception, or immersion? Many media outlets have been treating her “deception” as foolish and an example of bad judgment on her part. However, I’m not sure this is the case. She didn’t change her race to gain any sort of fame, and I don’t see her doing it to gain any sort of social, economic, or political advantage. Yes, she was the president of the Spokane NAACP. However, the NAACP has stated that racial identity is not taken into account during the hiring process. It also doesn’t seem that Rachel has done any harm to the African American image or community. In fact, she has done just the opposite, given her career and activism. It appears that Rachel has truly tried to become “culturally black,” and physical appearance must be a part of that transition. If Rachel had taken the same route as Caitlyn Jenner, and tried to publicize her transition, I’m not sure the response would have been any better, and her credibility may have suffered just as much.

There is plenty about Rachel’s case that I do not know. I do know she lied about her father being black, and about her adopted brother being her son. Was this right? Probably not, and I’m not sure how this plays into the credibility of her considering herself “black.” Perhaps this was an honest attempt to seal her transition. However, I don’t see that she has done any harm, nor do I see why an individual cannot become a part of another culture so long as they are willing to commit to that culture rather than be a part of it only superficially or when it best suits them. Caitlyn Jenner didn’t transition to a woman to gain any advantage, and I don’t think Rachel transitioned to being racially black to gain any advantage.** Perhaps she simply did so because she liked African American culture more than white culture. But does that make it wrong? I’m not sure it makes it any more wrong than someone wanting to live in a different country because they enjoy the sociocultural characteristics more than their home country’s. That being said, Rachel grew up white, and did not experience the same hardships as someone born with dark skin. But she made the decision to be seen in society as having black skin, and from that point forward was viewed under the same social lens as someone who was born with black skin. Unless race is explicitly tied to biology (it isn’t), ethnicity (it isn’t), or something other than physical perception (enlighten me?), then for all intents and purposes, Rachel Dolezal was black.

**Note: Gender transition and racial transition are not equivalent. However, similarities may be noted in the manner that the transition occurs and the motivation behind it.

The Role of Science in Society

Introduction

Carl Sagan once stated, “… the consequences of scientific illiteracy are far more dangerous in our time than in any time that has come before.” This statement becomes truer every day, as scientific and technological innovations are occurring at an ever-increasing rate. Studies suggest that less than 30% of Americans are “scientifically literate,” meaning that over 70% of Americans would have trouble reading – and understanding – the science section of the New York Times. So, why is this important? After all, everyone has their strengths and weaknesses.

The problem with this view is that science is a driving force behind our sociocultural evolution. New ideas and new inventions are constantly redefining how we live our lives. As time goes on, science and technology will define most of life as we live it. Already, this is true. A hundred years ago, people often lived day to day without electricity. Today, the most frightening thing most people could imagine would be a total loss of electricity. Imagine all of the things that simply wouldn’t work without it: phones, televisions, the Internet, lighting, heat and A/C, automobiles, and virtually anything that is manufactured. We have built a society in the United States that is almost entirely dependent upon electricity. Personally, it’s difficult for me to imagine a world without electricity because everything I know is based upon it. The Internet is the “electricity” of the 21st century. Much of what we do relies on the Internet, which in turn relies on electricity. These are just the two biggest examples. Life has become relentlessly complex and multifaceted. Most people have no idea how the world around them – that is, this artificial world, or anthropogenic matrix – functions.

As time goes on, our day-to-day lives will become less and less “natural” and more and more artificial. This is not inherently bad. However, it does raise the standards for what we must understand about how the world – especially our anthropogenic matrix – works. Failing to keep a basic understanding of science and technology is destined to segregate the population, facilitating the rise of an “elite” few and resembling more of an oligarchy than a representative democracy. I’m not much of a conspiracy theorist, and I don’t mean to imply that a “New World Order” is going to secretly control our lives. I do, however, think that if nothing is done about our general ignorance of science, we will slip away from the democracy that we claim to love so dearly. How? How can ignorance of science and technology lead to the failure of democracy? After all, you can vote regardless of your scientific literacy. It’s true: you can vote while being grossly ignorant of how the world works, which is part of the issue. To be clear, I do not think that there should be any kind of scientific literacy test in order to vote. This would only serve as fuel for the ever-broadening gap between those who understand science and those who don’t. In a democracy, everyone should be able to vote. However, given the state of knowledge that we currently have and the increasingly complex world in which we find ourselves, uneducated voting has disastrous consequences.

A Little Politics

Politics is, in its most basic form, the practice of influencing a population. This is done by verbally persuading people to get behind an action that will be set in motion in order to guide the population down a particular path of life. The United States is a representative democracy, which means officials are elected by the public to govern the public. The United States is not a simple representative democracy; many modifications are in place to give the minority a voice. Even with these modifications, though, “majority rules” is still the rule of thumb. On its surface, a “majority rules” system of operations seems ideal. Going with what most people want or believe seems like a solid idea. I certainly agree that this is typically a good philosophy – given that those voting are educated on the matter at hand.

The Modern Intersection of Science and Government

Nearly everything in our lives is built on science; it makes up our infrastructure. When a politician makes a motion to change or regulate something, he or she is making a change that affects our anthropogenic matrix and, consequently, the natural world in which our matrix operates, through such acts as deforestation, ozone depletion, species extinction, etc. If an individual does not have a basic understanding of how the world works, then how can that individual possibly make a good decision when electing a public official? Even worse, ignorance of science and technology (not to mention poor reasoning and logical evaluation skills) leads to a vote based solely on emotion and surface similarity. If you know nothing about a subject, you cannot make an educated decision regarding that subject. If your decisions are not based on an educated understanding, they must rest on something else. The next best choice would be decisions based on reason and logic. Unfortunately, critical thinking is MIA in many educational settings. Science acts as a major source of training by which people learn to reason and form logical conclusions. In turn, many – though not all – who base their decisions on logical reasoning are in the same group of people who base their decisions on knowledge of science. With less than 30% of the public scientifically literate, at least half of the population’s basis for political decisions is unaccounted for.

If you don’t use knowledge of science to aid in political decision-making, it’s likely that you are more swayed by charisma and emotional triggers. Those candidates who are more like you – or at least are ostensibly like you – are more likely to sway your opinion. After all, that’s what politics is all about – persuading people to get behind an action that will be set in motion in order to guide the population down a particular path. As a politician, if most of your constituents are not scientifically literate, then you are less likely to use science as a persuasion tactic and more likely to use charisma and emotionally charged wording that harks back to a tradition familiar to many of your constituents. Though not a noble method of persuasion, it is a smart one. Unfortunately, this only perpetuates the current epidemic of scientific illiteracy.

Why Public Knowledge of Science Matters

One major problem with scientific illiteracy is that politicians can make a poor decision – intentionally or unintentionally – with no one to call them out. Regulations (or the lack thereof) concerning issues such as climate change, medical research, and irresponsible use of resources must be based on science, as this is the process by which we understand these matters. Thus, if a politician uses a non-scientific basis for creating laws (a basis fueled by a constituency that is scientifically illiterate and, perhaps, an ulterior motive such as a monetary stake in the decision), then consequences are sure to ensue. The effects can be immediate, such as lack of funding for medical research, or delayed (and perhaps even more disastrous), such as not addressing anthropogenic climate change.

Politics aside, understanding science and technology is imperative to functioning in our increasingly technological world. 100,000 years ago, one had to be an adept hunter in order to be a contributing member of society; 10,000 years ago, one had to be adept in agriculture; today, we must stay informed on, at the very least, the basics of science. This includes environmental science, molecular biology, conservation biology, and genetics, among others. Expertise is not required for social and political progress, but awareness is essential.

Ebola – Not The Threat To the US That You Think It Is

About 900 people have died of Ebola in the last 6 months. Should you be worried? If you don’t want to read the post, and are just looking for an answer: NO!

If you’re one of the people who is saying, “But they are bringing 2 Ebola patients to the U.S., and a man in New York is suspected of having the disease!!” then please, for the sake of everyone around you, keep reading.

Let’s take a look at Ebola first. What exactly is it? Ebola Virus Disease (in humans) is caused by one of three species of Filoviridae viruses: Zaire, Sudan, and Bundibugyo. There are two other Ebola species, but they do not affect humans. Of the three species mentioned, Ebola Zaire is the nastiest, with anywhere from a 50–90% fatality rate – closer to 50% with supportive care and closer to 90% with no supportive care. Ebola is a hemorrhagic fever disease, characterized by high fever, shock, multiple organ failure, and subcutaneous bleeding. Typically, the patient first shows flu-like symptoms before progressing to the more characteristic bleeding symptoms. If the virus itself doesn’t kill you, oftentimes your own immune system will spiral out of control and send you into shock and, most often, death.

Now that the bad part is over, let’s look at why it’s not as scary globally as you might think.

Strengths:

  • High mortality rate
  • Ambiguous early symptoms
  • 3–21 day incubation period
  • No treatment or cure

Weaknesses:

  • Only spread through body fluids
  • Lacks a ubiquitous vector
  • Physically impairs the victim

Although the number of strengths outweighs the number of weaknesses, the quality of the weaknesses far outweighs the strengths. Without being airborne or transmitted by some ubiquitous vector, it is unlikely that any disease will ever cause a pandemic (meaning, global effects). In addition to this, Ebola impairs its victims. Even the flu-like symptoms are enough to dissuade you from much human contact. The scariest part about Ebola is the incubation period. Someone may not show symptoms for up to 3 weeks after being exposed to the virus. While this, in concert with the ambiguous early symptoms, might keep the flame flickering, it isn’t enough to start a wildfire – the little sketch below makes the point concrete.
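To see why transmissibility matters so much more than lethality, here’s a toy branching-process model – my own illustration, not a model from the epidemiological literature. The idea: if each case infects, on average, fewer than one new person (which is what fluid-only transmission and a bed-ridden victim buy you), the chain of infection fizzles out on its own.

```python
import math
import random

def poisson(lam):
    """Draw a Poisson-distributed count (Knuth's algorithm)."""
    threshold, count, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

def outbreak_size(avg_infections_per_case, cap=10_000):
    """Total cases seeded by one index case; each case infects a random number of others."""
    total = active = 1
    while active and total < cap:
        active = sum(poisson(avg_infections_per_case) for _ in range(active))
        total += active
    return total

random.seed(0)
for r in (0.8, 1.2):
    runs = [outbreak_size(r) for _ in range(200)]
    exploded = sum(size >= 10_000 for size in runs)
    print(f"avg infections per case = {r}: {exploded}/200 outbreaks grew unchecked")
# Below an average of one infection per case, every simulated outbreak dies out;
# just above it, a sizable fraction explode. Transmission mode sets that average.
```

Still not convinced? Let’s put the outbreak into perspective: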

We are currently experiencing the largest and most deadly Ebola outbreak in recorded history. The death toll is almost 900 in 6 months – less than the number of people who die every 6 months from hippopotamus attacks. The Spanish Flu of 1918, undoubtedly the worst pandemic in the history of mankind, infected about 30% of the people in the world and killed anywhere from 3–5% of the global population in a single year. If you see a grave marker with a death date anytime in 1918, chances are greater than not that the individual died from Spanish Flu. This astounding death toll was accomplished WITHOUT the advent of modern travel, i.e., no airplanes. The current Ebola outbreak has killed 900 people, or about 0.00001% of the world population (roughly one hundred-thousandth of a percent). 900 out of about 7.3 billion people worldwide. Oh yeah, the other thing? Ebola isn’t worldwide. It’s in Western Africa.
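For the skeptical, that percentage is easy to sanity-check; here’s a minimal Python sketch using the post’s rounded figures:

```python
# Rough scale of the 2014 outbreak relative to the global population.
ebola_deaths = 900          # approximate deaths over ~6 months
world_population = 7.3e9    # approximate world population

share_pct = ebola_deaths / world_population * 100
print(f"about {share_pct:.5f}% of the world population")  # about 0.00001%
```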


The only time Ebola has ever really been outside of Africa is… well… never. The closest we’ve come to that is recently bringing two patients to the US for treatment. Two patients who will likely not even be exposed to US air or land for the next two weeks, as they were flown in on a plane with a quarantine chamber and are now isolated in a hospital ward in one of the top hospitals in the US.

I’m not trying to downplay the seriousness of Ebola from the safety of my suburban coffee shop. Yes, it would be scary if I were living in Sierra Leone. Not so much because I would have a high chance of contracting Ebola, but because I wouldn’t know where it might be lurking. And, if I did contract it, I’d be more miserable and frightened over the next two weeks than I’d ever been in my life. I would only be relieved of this misery by multiple organ failure and bleeding out of my eyes until I died, or by the less likely chance that I survived. Ebola is a terrible, nasty disease, but it’s not a global threat, nor is it a U.S. threat.

Genesis 1: A Story of Functional Creation, Not Material Creation

Introduction

When looking to the Bible for information on mankind’s purpose, many modern Christians tend to overlook Genesis. On its surface, the articulation of Genesis 1 appears as an account of material ontology, or material creation. However, this understanding of the creation account is superficial and requires no investigation into the text or, more importantly, the sociocultural context of the literature. In fact, I will make the argument that reading Genesis 1 as an account of material ontology is not faithful to the original intention or reception of the passage. In lieu of a material ontological reading, I argue for a functional ontological reading of Genesis 1 that stays true to the text and dispels much of the contemporary debate in Christian cosmogony (beliefs about the origins of the universe).

The purpose of a functional interpretation of Genesis 1 is not to abolish inconsistencies in Christian cosmogony (though this is accomplished in the process), but rather to give a more insightful and meaningful reading of the text that communicates to the modern reader the same message that it communicated to the ancient Hebrew listener. In order to fully appreciate the intention of the text, the reader must explore the culture of the ancient Hebrew people, understand the divergence between several Hebrew words and their English translations, and be mindful not to cast their post-Enlightenment, materialistic perspective of the world onto the reading of the text. Taking this approach will not only render a functional ontological interpretation of Genesis 1 probable, but will also undermine a material ontological understanding of the creation account.

Cosmology

Cosmology, or the understanding of the universe, has drastically changed over the span of recorded history. The currently accepted cosmological model is an expanding universe that began from an original singularity – the Big Bang. Though there are still many unanswered questions about the Big Bang, most of which will likely never be fully elucidated, there has been a steady flow of empirical evidence supporting the theory since the 1960s. This “creation account” is representative of how cosmology functions in the 21st-century Western World – namely, the accumulation of empirical evidence supporting a hypothesis. A legacy of the 16th-century Scientific Revolution, our empirical worldview in the West has led to astounding advancements in science, medicine, and technology that have propelled humanity forward at a remarkable rate. However, this empirical approach to the world is so infused with the Western worldview that it is often difficult for one to step outside of this perspective.

We in the Western World tend to see any non-empirical approach to knowledge or truth as primitive and often not worthy of pursuit. This naïve approach becomes problematic when studying the writings of ancient societies that had a different cosmology and cosmogony than our own; the Bible, containing a library of ancient Hebrew and Greek writings, is no exception. The cosmology and cosmography (physical arrangement of the cosmos) of the Biblical authors and their contemporaries differed profoundly from modern-day ideology. This difference is expressed in the literature of those times, including the books of the Bible. Though its intentions were perspicuous to its contemporaries, Biblical literature, particularly the Old Testament, requires deliberation from modern-day Christians.

No sensible person today would deny that they are sitting on a spherical planet that orbits a star at the center of a solar system. In fact, nearly everyone today would agree that our solar system is just one of a countless number of solar systems in the Milky Way galaxy, which is, in turn, but a single galaxy among countless others. This belief is not held by a single ethnic group, religion, or country, but rather by humanity writ large. This is our modern cosmography, and it’s difficult to rationally deny given our current understanding. Mankind’s cosmography has undergone several paradigmatic shifts since the Biblical (or Babylonian) cosmography of the Old Testament authors.

The name “Genesis” derives from a Greek transliteration made in the 3rd century BC. The Hebrew name for the first book of the Bible, “Bereshit,” means “In the beginning,” alluding to its opening words. These opening words set up the theme of the chapter, namely, the cosmogony of the Hebrew people. This brings up an important point. It must be understood that the Bible was written for all people of all times, but was not written to all people of all times; the Old Testament was written to the ancient Hebrew people. When a work of literature is written, the author employs imagery and ideas that are familiar and relevant to the intended audience. Genesis, being an origins account, includes cosmology in its narrative. The image below is representative of a typical Babylonian cosmology, which the ancient Mesopotamian peoples, including the Hebrews, embraced.

[Image: a typical Babylonian cosmography]

It’s evident that ancient cosmography was very different from modern cosmography. To the ancient Mesopotamians, however, this cosmography made sense. This was how they understood their world. When the author of Genesis speaks of the “firmament,” we cannot simply translate it as “sky,” as this is not what it meant to the ancient audience. The firmament, to the ancient Israelites, was a solid structure holding back the waters above. This belief in a firmament and waters above was common to Babylonian cosmographies.

So, is the Bible wrong or untruthful for mentioning a firmament that we now clearly know is not there? I don’t think we can make the claim that the Bible is “wrong” about this if we keep in mind that it was written to a specific audience at a specific point in time. A material ontological reading – what many today mean by “literal,” though this is a misnomer – presents a problem: the Bible is supposedly unwaveringly truthful, and it claims that there is a firmament in the sky, above which lies some sort of ocean. Now what? Do we accept that there is a firmament in the sky and stop paying our half a penny per tax dollar for NASA, or do we investigate a little more? If we accept a material ontological reading of Genesis 1 but do not accept the cosmography of Genesis 1, then we have quite the theological conundrum. If the intention of Genesis 1 was to communicate material ontology, then it would need to be written using an understanding common to all people of all times in order to get the message across while also preventing falsehood from arising within the text. Perhaps, then, the message of Genesis 1 is not material ontology.

Function and Existence

The nature of existence is not something people contemplate on a regular basis. In a modern Western World mentality, the nature of existence is intrinsically tied to biological life. However, “alive” and “existence” communicate two different ideas. A rock exists, but we would not consider it “alive.” So, what does it mean to exist? In ancient Mesopotamia, material properties were not a sufficient condition for existence; an object or being’s existence was contingent upon function. This was true across ancient Near Eastern cultures, including the Israelites. This notion of functional existence was also expressed in the creation stories of the ancient Near Eastern world. In essence, creation stories, including those of the Israelites, were stories about the gods giving function and order to a system.

When investigating the idea of existence, a hermeneutical approach must be incorporated into the analysis. One example of this is the Hebrew word “bārā’,” which is translated as “create.” In the Bible, bārā’ is only used in reference to God. Also, there are a number of instances in the Bible where bārā’ must be understood as functional creation; correspondingly, there seem to be no instances where bārā’ mandates material creation. Exegetical work on bārā’ thus suggests a functional connotation. It might seem odd at first to have more than one word for an action, but this is common among languages. A language belonging to a culture that relies on the phases of the moon might have a dozen or more words for “moon” depending upon the context in which it is used. Language is a tool that is molded by what is important to the user. The idea of a word for creation, used in the context of function, fits with a functional ontological reading of Genesis 1. Functional creation is not only an ancient notion; even today there are examples of existence that rely on function.

John Walton gives a clear modern example of functional creation in his book “The Lost World of Genesis One.” Imagine a restaurant. When does a restaurant come into existence? Is it a restaurant when the building is constructed (i.e., the material creation)? A building can be or become anything, so this cannot be the case. The most sensible answer seems to be that a restaurant becomes a restaurant after a safety inspector deems it fit to conduct business. A restaurant that is closed, for one reason or another, is not in “existence.” Business, which is the function of the restaurant, is required for existence; thus, its existence is defined by its function. Naming is also related to function. The name Yahweh can be translated as “I am,” which speaks to the Judeo-Christian understanding of God’s function as an eternal and omnipresent being. Of course, material properties must precede function as a necessary condition for existence, but material properties alone may not be a sufficient condition for existence. Restaurants aren’t the only example of functional ontology today. Many things, including corporations, businesses, stocks, the Internet, and governments require, in one way or another, functional ontology. It is no stretch of the imagination to envision how a culture devoid of modern science and empirically based thinking could have an ontology rooted in function. Consequently, this provides support for the functional worldview of the ancient world, specifically that of the Israelites.

Can Genesis be Material and Functional Ontology?

Given the evidence, it seems to follow that Genesis was written as functional ontology. However, this does not necessarily eliminate the possibility of also reading it as material ontology. Many Christians argue that a “literal” reading of Genesis 1 is required if we are to take the Bible seriously. This proves problematic, as everything we know and understand about life, the universe, and much of science in general is not in accordance with a material ontological reading of Genesis. Many arguments exist to reconcile the discrepancies between science and a material ontological reading of Genesis 1; however, all of them rely on some sort of ad hoc modification leading to a concordist view of the Bible. Concordism, in this context, is the belief that Genesis 1 can be read as material ontology and still be in concordance with modern science.

Concordist views come in many forms, including young earth arguments and old earth arguments. For an old earth argument, the most common approach is to place long periods of time either between the “days” of creation or between the first and second creation accounts. One main problem with this kind of approach is that it ignores what we currently understand about ancient Israelite culture. It is not that the concordist hypotheses are too far-fetched (though I would argue that they tend to stretch science and hermeneutics quite thin), but rather that they are missing the point of the story. I am not making an attempt to disprove the science in their arguments; I am trying to show that the science does not matter. The arguments do not seem to take into account the fact that the Bible was not written to us; it was written to the Israelites. Science as we know it today did not exist when the books of the Bible were written; therefore, it does not make sense for the Bible to have been written with science in mind. The efforts to reconcile modern knowledge with a material ontological reading cause the reader to lose sight of the intention of Genesis 1.

A young earth creationist (YEC) would tend to agree with many of my critiques of concordist views. YECs see the Bible as absolute truth and hold that man should not read his finite understanding of science and theology into the text. If Genesis 1 speaks of 24-hour days (yom, in Hebrew), then the creation account must have taken place in six literal days, they argue. Attempts at stretching the meaning of words such as yom to accommodate modern cosmology do not enrich the authority or veracity of the text. They maintain that, because we have a finite understanding of the world and certainly of God, the text of Genesis 1 should be taken as a literal account of material origins. This YEC argument seems fair, and it does not have the problem of concordism, but there are some major issues.

By affirming Genesis 1 as material ontology, YEC proponents are, by default, reading their own culture into the Bible. The creation account only seems to suggest material ontology to a reader who has the cultural bias of empiricism. Those of us born into the 21st-century Western World are enculturated to see things in a physical and empirical manner. This becomes a problem when we read a work of literature from a culture that did not have this same mentality. A “literal” reading of Genesis 1 means something different for us today than it would have to the ancient Hebrews. The most “literal” reading of the text would be a reading that comprehends the text through the mind of the author. The best way to attain this understanding is to study the culture and recognize the biases that would be present in the author’s writing. In the case of ancient Hebrew literature, there would be a cultural bias against physical descriptions. We must take into account the cosmological and epistemological views of a culture when we read its literature. Along with the eschewal of modern scientific understandings of the world, this absence of cultural interpretation is perhaps the biggest failure of YEC theology.

The ancient Israelites were not concerned with the physical details of creation, so Genesis 1 would not have been written as material ontology. The Israelites were concerned with who created them and why mankind is on earth. A functional ontological reading of Genesis 1 answers these questions and clears up most of the modern-day cosmogony confusion. When viewed as a functional account of origins, the age of the earth, which tends to be at the heart of many concordist beliefs, is not an issue. There is no longer a need for the Judeo-Christian God to be a charlatan hiding in the gaps of knowledge; yom can mean a literal 24-hour day; evolution is no longer a threat; and the universe can be 13.7 billion years old. A functional ontological view allows Genesis 1 to succeed in its intention, namely, communicating to the reader (particularly the original audience of ancient Hebrews) who God is and the nature of his relationship to mankind.

Conclusions

We must be careful not to come to Genesis 1 thinking of it as a modern metaphor just because the language or structure is strange to us today. Metaphor and functional origins are qualitatively different characterizations. There are instances of figurative language within the creation story, but this does not mean the story itself is metaphor, nor does it say anything about the meaning of Genesis 1. It is important to understand that this style of writing was the method of conveying truth in the ancient world. Today, we use an empirical method to convey truth; the Hebrews did not see truth in this way, and used the meaning of a story to convey a truth about the nature of the story’s subject. Many Native American tribes convey truth in a similar fashion. Chronological or historical matters are not of significance. Rather, what matters is what the story says about the subject’s character or its relationship to mankind.

Many instances of odd structuring or bizarre language likely occur because of the vast cultural differences. For example, the ordering of events in the creation account stems largely from the Hebrew use of block logic, as opposed to the step logic to which we are accustomed. Similarly, early Hebrew writers emphasized theological points and were more concerned with the significance of events than with historical linearity. Historicity in Genesis would not have been an issue to the Israelites. Genesis was told as a story of functional ontology, expressing the importance of mankind’s place in relationship with the Creator. These differences in perspective and writing style do not make the story metaphorical or untrue; they simply express the perspectives of the ancient Hebrew people.

Given our cultural disposition toward empiricism, we must deliberate carefully when making assumptions about the meaning of Scripture. The reader must accept that the text was not written with their culture, including its notions about the nature of truth, in mind. The text was written in a manner that reflects the culture of the time; thus, the culture must be “translated” alongside the text. The ancient Hebrew audience would have understood the message that was being communicated through the creation story. It was written as a testament to their God’s power and glory. It enlightens the reader on who their God is and where mankind stands in relation to him; mankind is on earth as the image-bearer of the divine. The mention of physical objects in Genesis 1 gives the story context within ancient Israelite culture, in much the same way that objects are employed to this end by modern authors. Reading Genesis 1 as an account of material origins simply misses the point. In turn, it causes the text to say something that was never meant to be communicated, and flies in the face of our current understanding of nature and cosmology. Christians today must approach Genesis 1 not as the material ontology that their modern sociocultural context has shaped them to see, but as a functional ontology that reflects the views of its original audience.

For more reading on this subject, check out the following from John Walton:

Ancient Near Eastern Thought and the Old Testament: Introducing the Conceptual World of the Hebrew Bible

The Lost World of Genesis One: Ancient Cosmology and the Origins Debate

De-Extinction Is On Its Way

“What’s the point of bringing back some pigeons that have been gone for a century, or some hairy elephants that disappeared four millennia ago? Well, what’s the point of protecting unhairy elephants in Africa or over-specialized pandas in China or dangerous polar bears in the Arctic, or any of the endangered species we spend so much money and angst on preserving?”

– Stewart Brand

It’s difficult to argue with that logic. In 2012, the US spent over $3 billion on conservation efforts.

I don’t know about you, but I always dreamt of a real-life Jurassic Park. Unfortunately, it doesn’t seem like dinosaurs will ever have the chance to roam the Earth again. Quite frankly, with new research showing that most dinosaurs probably had feathers, I’m not sure it would even live up to what our minds are conditioned to believe dinosaurs look like anyway. They’d be giant, carnivorous chickens, more or less. But what about a mammoth or a thylacine?

While the DNA that once inhabited a dinosaur bone is long gone, victim to over 65 million years of radiation, hydrolysis, and other forms of degradation, DNA can be found in some more recent specimens. But how would it work? How could we possibly bring back – that is, de-extinct – an organism? Well, actually, it’s already been done.

The Sad Saga of the Pyrenean Ibex 

The last surviving Pyrenean Ibex died in 2000. Of all the ways for a species to go out, this one was found dead underneath a fallen tree. It seems as though Mother Nature was just out to get them. So, naturally, humans did what humans do best – try to one-up Mother Nature. Thinking pre-emptively, biologists cryogenically froze a tissue sample in 1999 from Celia, the last surviving member of her species. When Celia died, scientists were ready to bring her back.

The technique used is called somatic cell nuclear transfer. You can find a short video of it happening in real time here. Essentially, an oocyte – egg cell – from a domestic goat was de-nucleated, and the nucleus from one of Celia’s somatic (body) cells was inserted into the empty oocyte. The resulting cells were then transferred into a domestic goat surrogate. Unfortunately, the process proved technically difficult. 285 embryos were reconstructed. Of those, 54 were transferred to 12 ibex and ibex-goat hybrids. Only two survived the first two months of gestation before they too died. One clone was finally birthed in 2009 – the very first de-extinction. Unfortunately, the clone had a lung defect and died of a collapsed lung only 7 minutes after birth. One of the problems was likely the fact that Celia was already 13 years old – old age for a goat – when the tissue sample was taken. This means that her telomeres, the caps on chromosomes that protect the supercoiled DNA, were already very short. As DNA replicates, the enzymes cannot make it to the very end of the DNA (where the telomeres are located), so the telomeres are truncated. They act as a sort of buffering system to keep the actual genes from being damaged (on a side note, biological age correlates strongly with telomere length).
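To make the telomere “buffer” idea concrete, here’s a toy sketch. The numbers are entirely hypothetical, chosen only for illustration; real telomere dynamics are far messier:

```python
# Toy model of the end-replication problem: each division trims the telomere,
# and once the buffer is gone, coding DNA would start taking the damage.
telomere_bp = 10_000        # starting telomere length in base pairs (hypothetical)
loss_per_division = 50      # base pairs lost with each replication (hypothetical)

divisions = 0
while telomere_bp > 0:
    telomere_bp -= loss_per_division
    divisions += 1

print(f"buffer exhausted after {divisions} divisions")  # 200 with these numbers
```

With a donor as old as Celia, the countdown starts much closer to zero, which is one plausible reading of why her clone fared so poorly.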

The procedure seemed to doom any idea of de-extinction. After all, if we can’t even bring back a species that has been dead for under a decade, how can we ever hope to bring back a 30,000-year-old woolly mammoth? Fortunately, scientists are incredibly stubborn, and didn’t just drop the idea all at once. With advances in technology, science fiction often becomes reality. In the field of de-extinction, the limiting factor is DNA extraction and sequencing technology, which seems to be improving faster than Moore’s Law predicts it should.

A New Method

So, is there another way – a better way – to clone an animal than by somatic cell nuclear transfer? Maybe, and it’s called induced pluripotent stem cell (iPS)-derived sperm and egg cloning. The idea behind this is to splice your target species’ DNA (say, from a mammoth) into a surrogate stem cell genome (say, from an Asian elephant). Because these are stem cells (or pluripotent cells), they can become anything. So you coax the newly modified stem cells into becoming germ cells – those that will make the testes and ovaries. You then insert the germ cells into the embryos of a male and a female surrogate (Asian elephants, in our example). Now you have male and female Asian elephant embryos with mammoth precursory germ cells. You raise the two surrogates, and they will develop target-species (mammoth) gonads (testes and ovaries). You then mate the two, and out comes a “full-blood” mammoth (click here and skip ahead to about the 10-minute mark to see this example with falcons and chickens. I recommend watching the entire TED talk. It’s my favorite one, and will explain a lot about de-extinction).

You will see a second de-extinction in your lifetime, and hopefully more to follow. Expect it from passenger pigeons, gastric brooding frogs, and, hopefully, mammoths.

Maybe We Can… But Should We?

This, to me, is one of the biggest hurdles. You have to convince people that something of this caliber is a good idea. I began the post with a quote from Stewart Brand that I think epitomizes the argument for de-extinction. Hank Greely, a Stanford Law School professor specializing in biomedical technology ethics, gives an excellent TED talk on this (found here). To outline his talk, here are the 10 things we must consider – 5 risks and 5 benefits:

  • Animal Welfare
  • Health
  • Environment
  • Political Concerns
  • Morality
  • Scientific Knowledge
  • Technological Progress
  • Environment
  • Justice
  • Wonder

I will flesh these out quickly, but won’t spoil the TED talk.

Animal Welfare

  • Cloning isn’t a very “safe” process. It can take hundreds of embryos, and often the few that survive don’t last long. We need to ensure the welfare of the animals that we try to bring back.

Health

  • What if we bring back an animal and it happens to be a great vector for a terrible disease? Oftentimes the beginning of an epidemic is a new, better vector.

Environment

  • If we bring back a species, is it going to cause ecological problems?

Political Concerns

  • If we make De-Extinction a plausible conservation effort, will it undermine current efforts to preserve what we have? Why try to save them if we can just bring them back? Similarly, is it worth it financially?

Morality

  • To be short, we are playing God. We are doing something that, presumably, has never really happened in almost 4 billion years of life. We are redrawing the branches of the tree of life. It’s not something to be taken lightly.

Scientific Knowledge

  • We could learn things previously unknowable about genetics, evolution, and biology.

Technological Progress

  • De-Extinction is the edge of science. It is pushing technology to its outer bounds, making technological development increase faster than it normally would. This provides technological offshoots for many medical procedures.

Environment

  • Bringing back a species can actually be good for the community. See, for example, the effect of wolf reintroduction at Yellowstone.

Justice

  • Are reparations due? It’s arguable whether or not we caused the megafaunal extinctions – mammoths, woolly rhinoceroses, cave bears, etc. – but there’s no doubt that some species, such as the passenger pigeon, went extinct due to human activity, namely hunting. And, sadly, we continue down this destructive path, which is stripping the Earth of some of its most precious large mammals – tigers, elephants, and rhinoceroses, just to name a few.

Wonder

  • My favorite. This is what science does. It inspires us. It awes us. It brings our imagination outside of our minds and places it in front of us. Wonder isn’t all that impractical, either. Wonder is what drives scientific knowledge further. It’s a self-perpetuating field that is snowballing into the ever-decreasing realm of science fiction.

The “can we” of De-Extinction is coming to a close. It’s time to start discussing the “should we” aspect. The technology will be here very soon, but are we ready?

Hobby Lobby Case – A Rebuttal. Or, If You’d Like, Setting Things Straight.

I’ve been seeing a lot of pieces written about the Hobby Lobby ruling on emergency contraceptives, by both proponents and opponents. One I came across (found here) seems to think that any opposition to the ruling is simply illogical. Ironically, the whole post is illogical. Let’s take a look at the author’s two premises:

First, the author claims that terminology regarding the beginning of pregnancy is, “nothing but semantics (read: poppycock, malarkey, rubbish, hooey, baloney, bunk, drivel, BS).”

Uhh… not exactly. He also makes it seem like the beginning of pregnancy is controversial among the medical community when, really, it’s not. The author cites a Reuters article about a study that found that 57 of 100 doctors said pregnancy begins at conception, while 28 of 100 said it begins at implantation. Smoking gun, right? WRONG. The problem here lies in differing definitions between the lay public and the scientific/medical community. Definitions in science often mean something different than they do for the lay public, leading to confusion. Examples of this can be seen with “theory” or “law” or, indeed, “conception.” From a scientific standpoint, conception does not necessarily mean “at fertilization”; it can also mean at implantation. This makes sense given that the medical definition of pregnancy, according to both the American Medical Association and the British Medical Association, indicates that it begins at implantation of the blastocyst. So asking a doctor whether pregnancy begins at “implantation” or “conception” is a null question; to a medical doctor, those two words are interchangeable. The Reuters article even mentions that this is a weakness in the study (along with the fact that only 100 doctors were surveyed). The author either didn’t actually read the Reuters article or intentionally left that bit out.

He also asks, regarding Plan B, “Does it kill the blastocyst (the little clump of great-grandbaby cells from the fertilized zygote)? Absolutely. And that’s what Hobby Lobby objects to.” He then goes on to say that abortion isn’t the proper term, but rather “murder, manslaughter, butchery, carnage, homicide, infanticide, massacre, extermination, slaughter or annihilation.” Basically, he’s just playing on emotionally charged words to sway an uninformed audience. That’s a pretty typical tactic when you have no real argument. The fact of the matter is that Hobby Lobby won’t cover Plan B or IUDs, the latter of which happen to be among the most effective forms of contraception.

Implying that the pre-implantation blastocyst is a “person” and that preventing implantation is “murder” is like saying a pile of bricks is a hospital and that not using the bricks is malicious destruction of a hospital. It’s just ridiculous, and stems from a misunderstanding of science, particularly developmental biology.

So, unfortunately, the author has made no point at all. He basically said what Hobby Lobby said: “We think X does this, regardless of what the medical/scientific community says, so we shouldn’t have to do it.” Man, I wish that worked for me. “Officer, I define speeding as going at least 15 mph over. It doesn’t matter what the legal system defines speeding as, because I think it means at least 15 mph over. So I shouldn’t have to pay a ticket.”

On to point 2:

Essentially, the author uses a myriad of strange metaphors and emotionally charged wording to say that Hobby Lobby doesn’t have to provide emergency contraceptives (as which IUDs are not even classified, so the argument doesn’t even cover them) because they don’t provide “free food, water, gas, or clothes” either. Sorry, but last time I checked, the healthcare laws didn’t require an employer to provide those. The statement is just a bunch of red herrings that don’t actually support his point. The laws do, however, require companies with more than 50 full-time employees to provide health insurance, including contraceptive coverage. He continues on about how Hobby Lobby pays its employees well and yadda yadda yadda, and gives an inappropriate metaphor about a law requiring assault rifles to be given to employees. All of it is really pretty irrelevant. Basically, all that matters is that Hobby Lobby is required by law to provide health insurance, including contraceptives. Their only argument is that they think – or believe, as we should put it – that pregnancy begins when sperm meets egg, which flies in the face of the medical and scientific communities’ definitions. Ultimately, what you believe something does should not trump what the experts define it as, especially when it affects 28,000 people. Your belief should not trump the science, sorry.

The Paleo Diet – Brilliantly Simple, or Simply Wrong?

Introduction to the Paleo

According to thepaleodiet.com, "the Paleo Diet, the world's healthiest diet, is based upon the fundamental concept that the optimal diet is the one to which we are genetically adapted." Who can disagree with that? After all, it does make sense that the best diet would be one that, according to our genetics, our body can utilize most efficiently. However, is this what the Paleo Diet actually offers?

The Paleo Diet claims to offer “modern foods that mimic the food groups of our pre-agricultural, hunter-gatherer ancestors.” First we have to look at what the Paleo Diet means by our “ancestors.” Being a “paleo” diet, it is referring to our ancestors in the Paleolithic era, which extends from about 2.5 million years ago to about 10,000 years ago, just after the end of the last ice age and around the dawn of the Neolithic – or agricultural – revolution. 2.5 million years is a pretty broad range to select a diet from, but perhaps not so broad on an evolutionary timescale.

One issue that arises when studying the diets of ancient hominids is that archaeological sites older than about 10,000 years are scarce. The reason probably lies in the fact that prior to the Neolithic revolution, people were hunter-gatherers without permanent settlements. Hunter-gatherers travel to wherever the food is, presumably that which can be hunted (migratory animals such as elk, bison, or caribou, depending upon geographic location) and gathered (berries, nuts, shellfish, and so on). This would vary by the season and even by the century, as animals permanently migrated to new locations or became over-hunted where they were. However, when mankind developed agriculture about 10,000 years ago, people began to establish permanent settlements. These settlements, fueled by the domestication of plants and animals and thus liberated from hunting and gathering, provide a rich source of archaeological artifacts. It's difficult to find the few material bits and pieces of a nomadic lifestyle; when people settle in one location for hundreds or even thousands of years, artifacts build up, and the chances of finding something 10 millennia later are much greater.

How do we know about their diet? Archaeological evidence

So, how do we know what the hunter-gatherers ate? One way is to look through the archaeological sites that we do have. Animal bones are often signs that the inhabitants ate meat, especially when we also find tools that could have been used for butchering, along with cut marks on the bones implying that the animal was butchered. We can also track morphological changes over time. Changes in the size and structure of certain bones, such as the mandible and cranium, might indicate a change in diet: a tougher diet could demand a larger mandible, while a calorie-rich, meat-heavy diet could supply the energy needed to support a larger brain in a larger cranium.

Osteological analysis, though, is qualitative at best. It's important to remember that an archaeological site is merely a snapshot in time. For example, a site that was abandoned in winter (perhaps to move somewhere warmer, because its inhabitants died, or for some other reason entirely) might show heavy use of meat simply because few plants grow in the colder months. With so few sites, there isn't strong evidence one way or the other about diets; small sample sizes can be incredibly biased.

Stable Isotopes

Another way to study ancient diets is stable isotope analysis. If you remember from chemistry class, isotopes are atoms of the same element (same number of protons) with differing numbers of neutrons. Because the proton (atomic) number defines an element's chemical properties, isotopes behave as the same element but have slightly different masses. For example, about 99% of the carbon in the atmosphere is C12, carbon with an atomic mass number (combined number of protons and neutrons) of 12. This is the most stable form of carbon, and thus the most abundant. Carbon has two other isotopes that are relevant to scientific studies, C13 and C14. Though there are more isotopes than these, they are found in minute amounts and are so unstable that they decay rather quickly.

You have probably heard of carbon dating, which measures the relative abundance of C14 in an organic artifact and derives an approximate date based on the known rate of decay of C14. This works because there is a certain ratio of C12 to C14 in the atmosphere, which is taken up by organisms while they are alive. After the organism dies, it stops taking in new carbon, and its C14, being radioactive, decays away while the stable C12 remains. While this rests on the assumption that C14:C12 ratios were the same in the past, it can often be cross-verified with other forms of dating, such as stratigraphy, phylogenetic dating, other forms of radiometric dating, and sometimes even early writings (for example, the date derived from carbon dating an item purportedly from some event can be compared to a written, dated historical document describing the event).
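
For the curious, the arithmetic behind an age estimate is straightforward. Here is a minimal sketch in Python, using the accepted C14 half-life of about 5,730 years; the sample fractions are invented for illustration:

```python
import math

C14_HALF_LIFE = 5730.0  # years; the accepted half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    """Years since death, given the fraction of the original
    C14:C12 ratio still present in the sample (0 < fraction <= 1)."""
    decay_constant = math.log(2) / C14_HALF_LIFE
    return math.log(1.0 / fraction_remaining) / decay_constant

# A sample retaining half of its original C14 is one half-life old:
print(round(radiocarbon_age(0.5)))   # ~5730 years
# A hypothetical sample retaining 25% is two half-lives old:
print(round(radiocarbon_age(0.25)))  # ~11460 years
```

Real labs layer calibration curves on top of this simple formula to correct for past fluctuations in atmospheric C14, which is exactly the kind of cross-verification described above.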

Stable isotope analysis works, as the name implies, by measuring a stable, rather than radioactive, isotope. Because C13 is stable and does not decay (C12 and C13 are the only stable isotopes of carbon, and C14 is the longest-lived of the radioactive ones), it will remain in the bones and teeth in the same C12:C13 ratio as when the organism was alive. Great! Our bodies barely discriminate between C12 and C13, but some plants do, ever so slightly. Ribulose-1,5-bisphosphate carboxylase/oxygenase, commonly known as RuBisCO, is an enzyme that, in most plants, binds the CO2 entering the stomata. Rubisco happens to have a slight affinity for C12, meaning the plant, and everything that eats the plant, carries a disproportionate amount of C12 relative to C13. These plants are known as "C3" pathway plants.

In arid climates, where water is especially precious, plants had to adapt. A problem arises because water escapes through the stomata when they open to let rubisco capture CO2. Some plants, known as C4 pathway plants, therefore evolved to use another enzyme, PEP-carboxylase, to bind CO2. PEP-carboxylase binds CO2 much more strongly than rubisco and shows essentially no preference between C12 and C13.
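
To make the C3/C4 distinction concrete, isotope labs report carbon ratios in "delta" notation: δ13C is the per-mil deviation of a sample's C13:C12 ratio from a reference standard (VPDB). Here is a minimal sketch; the classification cutoff and the sample ratios are rough illustrations, not lab-grade values:

```python
VPDB_RATIO = 0.0112372  # C13:C12 ratio of the VPDB reference standard

def delta13c(sample_ratio):
    """Per-mil deviation of a sample's C13:C12 ratio from the standard."""
    return (sample_ratio / VPDB_RATIO - 1.0) * 1000.0

def likely_pathway(d13c):
    """C3 plants typically sit near -34 to -23 per mil, C4 plants near
    -16 to -9 per mil; -20 is used here as a crude dividing line."""
    return "C3 (rubisco prefers C12)" if d13c < -20.0 else "C4 (PEP-carboxylase)"

print(likely_pathway(delta13c(0.0109226)))  # about -28 per mil -> C3
print(likely_pathway(delta13c(0.0111024)))  # about -12 per mil -> C4
```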

Carbon isotopes are used in conjunction with isotopes of other elements, such as nitrogen, to assess the relative ratios of plants to meat in a diet. This all rests on small, systematic differences between autotrophs and heterotrophs, and among carnivores, herbivores, and omnivores. For example, organisms higher in the food chain tend to carry more N15 than organisms lower in the food chain. Interpreting the numbers requires understanding the isotopic baseline of the local ecosystem, though, and ratios can shift further when the food is manipulated, for instance by cooking. Ultimately, stable isotope analysis has a modest amount of discriminatory power, but it is not comprehensive: it uses quantitation to make a qualitative claim, and does so on a limited number of samples.
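
On the nitrogen side, a common rule of thumb is that δ15N rises by roughly 3-4 per mil with each step up the food chain, so a consumer's trophic position can be estimated against a local baseline. A toy sketch, with all sample values invented:

```python
TROPHIC_ENRICHMENT = 3.4  # per-mil rise in d15N per trophic level (a commonly used average)

def trophic_level(d15n_consumer, d15n_baseline, baseline_level=1.0):
    """Estimate trophic position relative to a baseline organism,
    e.g., local plants assigned to level 1."""
    return baseline_level + (d15n_consumer - d15n_baseline) / TROPHIC_ENRICHMENT

# Invented numbers: local plants at +3 per mil, a bone sample at +10 per mil
print(round(trophic_level(10.0, 3.0), 1))  # ~3.1, consistent with a meat-heavy diet
```

This is exactly why the local baseline matters: the same +10 per mil reading implies a very different diet if the ecosystem's plants start at +6 rather than +3.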

Problems with the logic of a Paleo Diet

Which "paleo" should we eat like? Inuit people of 10,000 B.C.E.? Mitochondrial Eve, 200,000 years ago? Homo erectus, a million years ago? Clearly some eras, and some hominid species, ate more meat than others. An Inuit living in northern Canada survived largely off of seal fat, while Homo erectus probably lived more off of fruits and nuts. Humans survived and came to dominate the planet largely because of their adaptability, including our omnivorous diet. Our ability to live on mostly nuts or mostly blubber has granted us the freedom to roam from the heart of Africa to the frozen lakes of Canada. Paleolithic hunter-gatherers simply ate what was available to them.

Many Paleo dieters cite articles discussing health disparities that arose when agriculture entered the picture. Those disparities are real, but they didn't necessarily arise because we stopped eating a "paleo diet." More likely, health problems arose because we stopped eating such a wide variety of foods: many ancient peoples went from elk, bison, nuts, and berries to whatever they could domesticate. Eventually, our domesticated crops and animals grew in variety and things leveled out a little. This was likely not a rapid transition; domestication may have begun simply as a way to supplement hunting and gathering before the boom of the Neolithic Revolution. Regardless of your diet, it is important to eat a variety of foods in order to cover all the nutrients you need. Many people in Westernized cultures today eat a much more monotonous diet than they should.

Are we genetically identical to our “Paleo” brothers and sisters?

One of the main arguments of the Paleo Diet is that our genome has changed little since the end of the Paleolithic period, meaning our bodies are still best adapted to the diet of that time. This argument is short-sighted: to claim that our genome has not adapted to our Neolithic lifestyle is simply incorrect. It is true that genome evolution lags far behind cultural evolution and is often overshadowed by it. However, there are some key differences between our genomes and those of Paleolithic hominids. The two best-known adaptations are the amylase and lactase mutations. Amylase is an enzyme that digests starch from grain. As the Neolithic Revolution kicked into gear, those with extra copies of the salivary amylase gene better metabolized all of the new grain they could grow: more amylase in the saliva means starch digestion begins in the mouth rather than waiting until the gut.

The second mutation is a regulatory one. People are born with a gene regulating the production of lactase, the enzyme that breaks down the biologically unusable dairy sugar lactose into the usable sugars galactose and glucose. Before the animal husbandry practices of the Neolithic Revolution, the lactase gene became transcriptionally inactive, or "turned off," in most people around the age of 5-7; by that age a child was weaned and had no further need for lactase. However, once people began raising dairy animals, such as goats and cattle, dairy products like milk and yogurt became an important staple food. This seems to have driven positive selection for a mutation that allows the lactase gene to remain "on" throughout life. Those with the lactase and amylase mutations could better exploit dairy and grain products than those without them. So, while our genomes are not radically different from Paleolithic genomes, they are indeed different, and they have adapted to some of the Neolithic diet changes.

Microbiomes

Although our genome is relatively similar to our ancestors', our microbiome certainly isn't. The microbiome is the sum of the microorganisms that inhabit us. This might not seem like a big deal, so let me put it in perspective: by classic estimates, the microbial cells in and on your body outnumber your own human cells by roughly ten to one, and their collective genes vastly outnumber those in the human genome. By those counts, you are roughly 90% microorganism. With the recent completion of the Human Microbiome Project, expect to see some incredible discoveries about the differences between ourselves and our Paleo ancestors in the near future.

So how do we study the Paleo microbiome? One way is through ancient DNA. Unfortunately (or fortunately, for researchers today), there were no Paleo dentists, nor were there any Paleo toothbrushes. When people ate, plaque built up and calcified on their teeth. This calcified plaque, called dental calculus, preserves the DNA of the microorganisms that made up the plaque, along with some DNA from the food itself. From this, using Next Generation Sequencing techniques, we can learn about the kinds of food and the microorganisms that were present in the bodies of our ancestors, and by comparing what we find to oral microbiomes today, we can better understand what Paleo people ate. Microfossils can also be preserved in dental calculus, allowing for visual confirmation of food in the plaque. Again, these are qualitative measures limited by sample size, but they are the best methods we have, and they are producing some excellent research.
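
To give a flavor of what "comparing reads to known microbiomes" looks like computationally, here is a deliberately toy sketch of k-mer-based read assignment. The reference sequences and taxon labels are invented, and real metagenomic classifiers are vastly more sophisticated:

```python
def kmers(seq, k=4):
    """All overlapping substrings of length k in a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Invented stand-ins for reference genomes of two oral taxa
REFERENCES = {
    "Streptococcus-like": "ATGGCCTTAGGACCTTGA",
    "Methanobrevibacter-like": "TTGACCGGTTAACCGGAT",
}
REF_KMERS = {taxon: kmers(seq) for taxon, seq in REFERENCES.items()}

def assign_read(read):
    """Assign a sequencing read to the taxon sharing the most k-mers."""
    scores = {taxon: len(kmers(read) & ks) for taxon, ks in REF_KMERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unassigned"

print(assign_read("GGCCTTAGGA"))  # -> Streptococcus-like
print(assign_read("CCGGTTAACC"))  # -> Methanobrevibacter-like
```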

Is the food still the same?

People freak out about GMOs, but the truth is that basically everything we eat, meats and plants alike, is genetically modified. Over thousands of years we have artificially selected plants and animals for particular traits. While our genome has changed somewhat since Paleolithic times, plant and animal genomes have changed radically, largely due to human manipulation. So even if you eat according to the Paleo Diet, you are eating the modern-Paleo Diet, not the Paleo-Paleo Diet; you aren't actually eating the way you think the ancestors ate. Our modern plants are "human inventions," as Dr. Christina Warinner, a leading dental calculus expert at the University of Oklahoma, puts it.

Ultimately, the Paleo Diet, as it is marketed, isn't really a Paleo diet at all. There's no harm, and definitely some benefit, in cutting refined sugars and overly processed meats out of your diet. However, eating modern versions of nuts, fruits, veggies, and more meat isn't going to make you any more like a Paleo-man or Paleo-woman than eating a normal, balanced diet would. If anything, skipping the legumes, dairy, and whole grains that the Paleo Diet prohibits could leave you short on certain nutrients. Technological and agricultural advances have produced some amazing foods that our Paleo ancestors could only have dreamt about. If you really want to be Paleo, then take advantage of the advances in food science. It's what our ancestors would have done.

Biocentrism – An Alternative “Theory of Everything.”

For a long time, physicists have dreamt of a unifying “Theory of Everything” that would amalgamate every physical aspect of the universe into one packaged theorem. As of now, physics hangs in the balance between Einstein’s Theory of General Relativity (GR), which does a pretty great job of explaining relationships between macrocosmic entities, and Quantum Field Theory (QFT), which does an excellent job of showing that GR is wrong on the microcosmic scale, but we aren’t sure why. Both have tremendous explanatory power (though nobody really knows what QFT is actually saying), but, unfortunately, are incompatible cosmologies. Subatomic particles, explained by QFT, simply don’t fit the laws of GR. God may not “play with dice,” as Einstein put it, but apparently he does roll subatomic dice. Truly, QFT embodies Aristotle’s maxim, “the more you know, the more you know you don’t know.” More recently, physics has also devised String Theory, of which various versions can be incorporated into a multi-dimensioned theory known as “M-Theory.” M-Theory also has incredible explanatory power, accounting for all of the fundamental forces and types of matter. It’s a great hypothetical framework, but lacks a practical aspect that is necessary in any strong scientific theory, making it about as believable as any other cosmological mythology. While these “Big 3” theories are all contained within the realm of physics, Robert Lanza claims there is a 4th, more appropriate explanation. And it lies within the realm of biology.

Robert Lanza is a leading stem cell researcher and Chief Scientific Officer at Advanced Cell Technology. He is one of the world's most respected biologists, having been mentored by giants in a variety of scientific disciplines, including Jonas Salk (developer of the first polio vaccine), B.F. Skinner (the famous psychologist and behaviorist), and Christiaan Barnard (who performed the first heart transplant). In other words, Lanza has a lot to lose, and likely wouldn't tarnish his reputation on something he didn't deem worthwhile.

In his book, “Biocentrism,” Lanza offers a cosmology situated within the field of biology, specifically within consciousness. Regarding Biocentrism, Lanza notes 7 principles. I will list them all and then take a closer look at each one:

  • What we perceive as reality is a process that involves our consciousness. An “external” reality, if it existed, would – by definition – have to exist in space. But this is meaningless because space and time are also not absolute realities but rather tools of the human and animal mind.
  • Our external and internal perceptions are inextricably intertwined. They are different sides of the same coin and cannot be divorced from one another.
  • The behavior of subatomic particles – indeed all particles and objects – is inextricably linked to the presence of an observer. Without the presence of a conscious observer, they at best exist in an undetermined state of probability waves.
  • Without consciousness, “matter” dwells in an undetermined state of probability. Any universe that could have preceded consciousness only existed in a probability state.
  • The structure of the universe is explainable only through biocentrism. The universe is fine-tuned for life, which makes perfect sense as life creates the universe, not the other way around. The “universe” is simply the complete spatio-temporal logic of the self.
  • Time does not have any real existence outside of the animal-sense perception. It is the process by which we perceive changes in the universe.
  • Space, like time, is not an object or a thing. Space is another form of our animal understanding and does not have an independent reality. We carry space and time around with us like turtles with shells. Thus, there is no absolute self-existing matrix in which physical events occur independent of life.

That’s a lot to sort out. Let’s start with the first principle:

  • What we perceive as reality is a process that involves our consciousness. An “external” reality, if it existed, would – by definition – have to exist in space. But this is meaningless because space and time are also not absolute realities but rather tools of the human and animal mind.

The second half of this tenet is incorporated into the 6th and 7th, so I will just take a look at the first half. Yes, what we perceive as reality is a process that involves our consciousness, regardless of the sense used to perceive it. And, yes, an external reality would need to exist in some sort of space, as it is external to our own perceptive machine, i.e., the brain. Our senses indeed mean nothing without consciousness. Similarly, you can perceive things that are not there, or misperceive a sound as, say, a color; for more information on this, look up synesthesia.

Now for the second:

  • Our external and internal perceptions are inextricably intertwined. They are different sides of the same coin and cannot be divorced from one another.

Again, I don’t see a problem with this statement. What you “see,” “touch,” “smell,” “taste,” or “hear” are all meaningless without interpretation from the brain, or consciousness. If I think about the number 4, the conscious process is not so different from when I see the number 4 and my brain interprets the meaning of the symbol.

So far so good, what about 3?

  • The behavior of subatomic particles – indeed all particles and objects – is inextricably linked to the presence of an observer. Without the presence of a conscious observer, they at best exist in an undetermined state of probability waves.

This one assumes that the physical instantiation of the whole is the sum of its parts. On a basic level, this makes as much sense as anything else: if subatomic particles exist as waves when unobserved (see the "double slit experiment" for details), then so should the things they compose. There are some points of contention for this generality. For example, sodium (Na) and chlorine (Cl) are both pretty dangerous to humans in elemental form, yet together they create sodium chloride, or table salt, the stuff sprinkled on those delicious french fries. Perhaps subatomic particles have some yet-undetermined attribute that causes their fundamental behavior to change when combined. This, however, seems unlikely. Then again, it's quantum mechanics; everything in quantum mechanics seems unlikely. Ultimately, this principle passes on the grounds of simple logic, but could be troubled by misunderstood properties of subatomic particles.

The fourth is linked to the third:

  • Without consciousness, “matter” dwells in an undetermined state of probability. Any universe that could have preceded consciousness only existed in a probability state.

This principle follows logically from the third. Namely, matter, composed of subatomic particles, seems to exist as a wave until observed (i.e., perceived consciously). Thus, a pre-conscious universe would exist as a wave, suggesting it exists only as a probability.

The 5th is perhaps the shakiest principle:

  • The structure of the universe is explainable only through biocentrism. The universe is fine-tuned for life, which makes perfect sense as life creates the universe, not the other way around. The “universe” is simply the complete spatio-temporal logic of the self.

Lanza is jumping the gun a little here. Yes, it does seem that life (or consciousness) "creates" the universe, in the sense that the subatomic particles composing it exist not as particles but as waves in the absence of a conscious observer. As such, how we make sense of the things we perceive using space and time, the spatio-temporal logic of the self, is essentially the "universe." The claim is bold, but not completely out in left field.

The 6th explains part of the 5th:

  • Time does not have any real existence outside of the animal-sense perception. It is the process by which we perceive changes in the universe.

For those unacquainted with physics or neuroscience, this seems radical. In fact, it seems a little radical even for those in the fields. However, it appears to be true. Time is a tool. Our brains are wired for connecting the dots, and to connect the dots we need a connector: that connector is time. Think of time as not so different from measuring length, weight, or any other attribute. If you can't imagine how time might not exist, try imagining a world in which we cannot measure length. Lengths would still exist, but we would need a ruler to compare them. Likewise, events exist, and we use time to compare them. If this still doesn't make sense, do some independent research on the topic; it's difficult to explain, and I'm certainly not the most qualified individual to do the explaining. But it doesn't defy anything, really, so this one can be accepted as well.

And now to wrap things up with the 7th principle:

  • Space, like time, is not an object or a thing. Space is another form of our animal understanding and does not have an independent reality. We carry space and time around with us like turtles with shells. Thus, there is no absolute self-existing matrix in which physical events occur independent of life.

Again, space is how we compare our perceptions. It's another "measuring stick" for what we perceive, much like time. In fact, quantum-entangled particles seem to defy space and time: measure one, and the other's state is fixed instantaneously, with the correlation appearing faster than light could travel between them (though no usable information moves faster than light). And the faster you move, the more lengths contract and the slower your clock runs relative to an outside observer. So space and time can both change depending upon the circumstances. This, taken with the claim that our consciousness is what interprets, and thus "creates," the universe, forms Lanza's main case against an independently existing physical universe. Again, this is difficult to comprehend, and I'm sure I muddy the picture more than others would. But look into it and you might understand it more clearly.

So, those are the 7 principles of Biocentrism. On the surface, they seem to make sense. At least, as much sense as a physically oriented view of the universe. They’re just incredibly strange and require a complete paradigmatic shift in order to comprehend. The biggest thing that stuck out to me while reading Lanza’s book and claims of Biocentrism was the inception of the universe. According to Lanza, and perhaps other proponents of the Participatory Anthropic Principle (PAP), the universe existed in an indeterminate state before consciousness. Once consciousness arrived, the universe could be observed and thus materialize. However, wouldn’t the consciousness, embodied by a conscious being, only be a probability unless observed? It seems paradoxical.

Now, I don't necessarily subscribe to biocentrism. I do think it explains a lot in a very cool fashion, but it lacks practical testability, and thus falsifiability, just as M-Theory does. Perhaps, as we begin to understand QFT better (or at all), we will be able to manipulate and experiment upon subatomic particles well enough to provide evidence for or against M-Theory and Biocentrism. Until then, these two cosmologies are too theoretical to stand as a "Theory of Everything." At any rate, I would definitely recommend the book. It's an excellent, thought-provoking read that will challenge the way you see the world. You have to approach it with an open mind. A background in science wouldn't hurt, but Lanza does a pretty good job of explaining concepts. I'll end with a quote, again from Aristotle:

“It is the mark of an educated mind to be able to entertain a thought without accepting it.”

Keep that in mind if you read the book.

Ancient… DNA?

When I tell people that I want to study ancient DNA, I am usually met with a perplexed look, followed by, "Ancient… DNA?" Yes, ancient DNA, and it is exactly what it sounds like. DNA, the double-stranded molecule whose sugar-phosphate backbone carries the bases adenine, thymine, guanine, and cytosine, is found in every cell of every living thing. As you learned in school, it's the stuff that makes us, us. The language of life, as it may be called. It provides instructions for every part of your being. So, what happens when you stop… being?

While you are alive, your body does constant maintenance on everything, including your DNA. Oxidation, hydrolysis, UV irradiation, and other sources of damage wreak havoc on your DNA, only to (hopefully) be taken care of by internal repair mechanisms. When you die, those repair mechanisms stop working, leaving your DNA vulnerable not only to that damage but also to a whole new onslaught from microorganisms. Your DNA begins to break down: nucleotides bond inappropriately to one another (e.g., thymine dimers), transform into entirely different bases (e.g., cytosine → uracil), or are excised via hydrolysis, while the once-strong backbone succumbs to harsh UV irradiation from the Sun. The rate at which all of this happens depends on the environment, however. Less water means less hydrolysis, less sunlight means less UV irradiation, and freezing temperatures slow decay. Although we are discovering that, under ideal conditions, DNA can persist for a very long time, 65 million years is simply too far out of scope. Sorry, no Jurassic Park (I know, I'm bummed too). Stay tuned, however, for Pleistocene Park…
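
To see how much the environment matters, post-mortem DNA survival is often modeled as simple exponential decay, with a rate that depends heavily on temperature and moisture. A minimal sketch; both half-lives below are illustrative stand-ins, not measured values:

```python
def fraction_intact(years, half_life_years):
    """Fraction of DNA fragments expected to survive after a given time,
    assuming first-order (exponential) decay."""
    return 0.5 ** (years / half_life_years)

# Hypothetical half-lives: decay is far slower in cold, dry, dark burial
WARM_WET_HALF_LIFE = 100.0      # e.g., a hot, humid site
PERMAFROST_HALF_LIFE = 10000.0  # e.g., frozen remains

print(fraction_intact(10000, WARM_WET_HALF_LIFE))    # effectively zero
print(fraction_intact(10000, PERMAFROST_HALF_LIFE))  # 0.5: plenty left to sequence
```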

The field of ancient DNA is fairly new, beginning in the mid-1980s with museum specimens. Since then, it has piggybacked on the rise of the Polymerase Chain Reaction (PCR) and, more recently, on advances in "Next Generation Sequencing." Both technologies allow you to take a few copies of DNA and create billions of copies within a few hours. It's truly remarkable. So remarkable that Kary Mullis, the inventor of PCR, won a Nobel Prize only a decade after his invention, rather than the usual 30-40 years after the fact. DNA sequencing technology is progressing faster than anyone can keep up with, which is good news not only for ancient DNA researchers but for biomedical researchers of every kind.
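
The "few copies to billions" claim is just exponential doubling: in the ideal case, every PCR cycle doubles each template molecule present. A back-of-the-envelope sketch:

```python
def pcr_copies(starting_copies, cycles, efficiency=1.0):
    """Ideal PCR yield: each cycle multiplies the template count by
    (1 + efficiency), where efficiency = 1.0 means perfect doubling."""
    return starting_copies * (1 + efficiency) ** cycles

# Ten template molecules through a routine 30-cycle run:
print(pcr_copies(10, 30))        # ~1.1e10 copies: tens of billions
print(pcr_copies(10, 30, 0.9))   # ~2.3e9 even at 90% per-cycle efficiency
```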

This is my first blog post, so I’ll keep it short, offering only an introduction into the kinds of things I’ll be posting about. Ancient DNA, as of now, is the closest thing we have to a time machine. Reading the language of life for long, long dead organisms allows us a window into the prehistoric world. It allows us to better understand the physiology of and our relationship to animals that belonged to a world that no modern human has ever witnessed. And, if the advances keep up the pace, the field of ancient DNA may be able to not only offer a window into the past, but a door through which the past can step into the present. As is common in science, what was once science fiction is quickly becoming reality. De-extinction is in the not too distant future.

