Blog

  • The Power of Negative Thinking

    In spite of the multiple posts I have made expressing reservations about social media, it should come as a surprise to no one that I do waste a fair amount of time noodling around on Facebook. Part of it is sheer laziness – I come home from work, eat, and just want to unplug my brain for a while, and the internet makes it easy – too easy. But part of it is also fascination with what other people decide to post. I am just as guilty as the next person of posting my exercise habits, what I eat, where I’ve been, etc. I’m also guilty of sharing articles, political opinions, stuff I think is funny, and things I find compelling or inspirational. So I will refrain from pointing the finger at others who do the same thing, but with whose views I may disagree.

    That disclaimer aside, I’m going to rant about a type of post that I sometimes find particularly galling: the “inspirational” post. These are the pictures/quotes that say sunny things about keeping a positive attitude, being thankful for each day, appreciating friends and family, not letting negative situations get you down, etc. This is all well and good, but I am finding that an excess of platitudes tends to drain them of any impact. I also find that these little bon mots seem to dumb down or gloss over the reality of complex emotions and situations. In particular, I often see posts about how you are in control of how you feel in any given situation. That is, if you feel bad about something, it’s because you are allowing yourself to feel bad. To a large degree, I think this is bullshit. Self-blame does not help in a negative situation. It’s bad enough that something is bringing you down; do you have to also point the finger at yourself and feel even worse because you are “allowing” yourself to feel negative emotions? This is part and parcel of the same bullshit idea behind books like The Secret, which trick you into buying them with New Age wishful thinking crapola about how if you just believe something, it will happen.

    I’m not saying that positive thinking isn’t a powerful tool in a person’s emotional arsenal. What I think is important is to acknowledge that we aren’t always in control. People do bad things to each other, and it hurts. We make bad decisions that lead to painful emotions or situations, and it hurts. We judge and ridicule and put others down, and yes, it hurts – them and us. But it’s too simple, too general, to say that when we criticize someone or something, we are secretly, unconsciously, criticizing something in ourselves. It’s too simple to say that we can choose not to feel pain when something external to us – something outside our control – makes us feel bad. In fact, I argue that it’s not just okay, it’s essential to allow ourselves to feel negative emotions. Stuffing them down, not genuinely feeling those negative feelings of pain, sadness, judgement, helplessness, loss, embarrassment, humiliation – that does not help, and I think it makes things worse when the platitudes tell us we shouldn’t feel those feelings.

    Negative emotions, feelings, experiences exist. We should acknowledge them. We should try to mitigate them and deal with them as best we can. I don’t think we should wallow or allow the negative thoughts to be an excuse for treating other people poorly. There is room for learning and introspection in every situation, and some people learn faster than others. Just don’t tell yourself that you shouldn’t feel those negative feelings. Heed the platitudes, but recognize them for what they are: feeble, if well-intentioned, attempts to apply general solutions to specific and often complex situations.

  • Technology and Its Discontents: Mainly Mozart

    On Friday, January 11, 2013, I went to the opening of the Mainly Mozart spotlight series at the invitation of a work contact. I went partly for the chance to see world-class musicians perform classical pieces, and partly for the networking opportunity for my job. There was a wine reception before the concert, and I was excited to go and meet new people through my contact, who is a prominent person in the community.

    The dress code for the event was semi-formal, and I had a small evening bag with me that held my keys, ID, and phone. Throughout the wine reception, I periodically checked my phone for messages and didn’t think much of it. However, once the concert began, I was forced to face the reality of the hold technology has taken on my life. Imagine the scene: a small auditorium, an intimately set stage, three musicians of the highest caliber (one is the principal violist for the New York Philharmonic, and the violinist and cellist have played for some of the most prestigious orchestras and music ensembles in the nation), and two pieces of music, one by Mozart and one by Beethoven. You could not ask for a better opportunity to be transported by the artistry and mystery of music. Yet, even as the incredibly beautiful sounds of the strings began, a tickle in my mind was telling me to look at my phone. For what? For a move in Words with Friends? For an e-mail alert? For a text? For a Facebook notification or status update? The sound was off on the phone, but I could still surreptitiously pull it out and look at it if I wanted to. I resisted the urge, and felt ashamed. Even as I was amazed by how horsehair drawn over strings could produce such complex and arresting sounds, even as I strove to meditate on the performance and concentrate all my focus on it, that tickle in my mind persisted.

    I know what is going on here, and it is part of my overall discontent with technology. I’ve read articles about the neurotransmitters our brains release in response to stimuli, and how researchers are finding that responding to our gadgets produces its own little dopamine squirt. Dopamine is our brain’s way of rewarding us. It’s the little rush of excitement we get when a message or text or tweet arrives. It’s the same anticipatory feeling and satisfaction I used to get as a child when checking the mail: if I received a letter or card, my brain released that little squirt of pleasure. Our quest to feel it again is what has us checking our devices obsessively.

    I can blame the dopamine, but the dopamine does not have to control me. As I listened to the Mozart and the Beethoven, as I was simultaneously moved by the music yet distracted by my brain’s tickling desire for stimulation and reward, I resolved to wrest control back from the technology. There are things we can do: set limits on computer time (e.g., no web-surfing, Facebooking, etc. after 8 pm). Leave the phone in the car when attending an event like a concert, movie, or play. Ask ourselves if others really need or want to see photos of what we are eating or watching or doing, and resist the urge to Instagram everything. Get back to meaningful communication: send a card or letter instead of an e-mail; make a phone call instead of texting or emailing. It can be done, and I would go so far as to say it should be done. There are limits to everything. Technology has its place, and it can even improve and enhance communication and human relationships, but I believe we need to remember what it was like before we were inundated with technology’s relentless dopamine squirts.

    It is almost 8 pm – my curfew for technology. Perhaps you will decide to set one for yourself.

  • Technology and Its Discontents: A Preface

    I suppose it’s ironic that I’m writing a rant about the drawbacks of modern technology using modern technology. Really, though, I want to write about something I’ve ruminated on at length already: communicative strategies. More specifically, I’m concerned about the ways in which modern technology is changing communicative strategies, and along with them, our approach to the world in general. Let me preface this with a new personal goal of mine: I want to unplug, at least once a week, from electronic distractions. At the opposite pole, I want to make sure I update my electronic rants more frequently; that is, at least once a week. Are these dichotomous goals? I don’t think so; but like all things, there is a limit to both. I find myself far too distracted by modern communicative technologies, but simultaneously I sometimes feel that I don’t use those technologies as constructively as I could.

    So what’s the rant? This is a huge topic, but I want to start by unpacking the idea that the modern communicative technologies offered by computers, smartphones, tablets, etc., and in particular, the instant updates possible via social media, news sites, streaming video, and the like, are enhancing our ability to communicate. I believe that these tools are actually decreasing our communicative abilities. How should we define communication? At the very least, it is the passing of a message from one individual to at least one other individual. That communication does not have to be face to face, or even ear to ear, but the point is that ultimately a message is transmitted. The human ability to transmit messages via what we call language is almost certainly unique to Homo sapiens; although other species do have complex forms of communication, there is tremendous debate over whether any non-human form of communication can be called language (a topic which may someday earn its own post). But if there is no one to receive the message, can the messages we send rightly be called communication? As just a small part of my overall questions about modern technology’s impact on communication, I often find myself wondering if we are mostly shouting into the dark. I would like to venture the hypothesis that modern technology is highlighting some of our species’ basest and most primitive inclinations.

    We are only ten or twelve thousand years removed from the time when all humans were living in small, close-knit tribal groups in which the survival of the individual depended on the survival of the group and vice versa. Yet it would be a mistake to assume that in small, egalitarian groups there was no social striving, no quest for power, no competition. All those things existed, but in general the needs of the group would check any one person from assuming too much power. Enter agriculture and more complex technology, and some of the checks on power and status-seeking began to erode. Agriculture made it possible for more people to survive with less effort, and to live in much larger groups where it became increasingly difficult to know every individual, much less communicate with them regularly. When you don’t know someone, that means you don’t need them; and if you don’t need them, there is no reason to care about that person’s survival. Fast forward to today (and skipping over, for the time being, the cultural, social, and technological changes that ultimately led to the capitalist world-system in which we now live) and status-seeking is a prime motivator of human social, economic, and political behavior.

    What does any of this have to do with modern technology and modern communication? In a strange way, all these rapid-fire communication tools that are literally at our fingertips have made it possible for us to, once again, communicate with the entire group. This is not to say, of course, that every person’s status update or tweet or blog post is being transmitted to every person in the world. But it is to say that we are able to pass messages to complete strangers, whether intentionally or not, and we are finding that those messages aren’t crafted carefully enough to avoid misunderstanding or insult or any number of misapprehensions. We are having to learn from scratch how to communicate deliberately and carefully, but all too often people are using what should be a fine-grained tool as a bludgeon. We can communicate with an enormous group, but we seem to take little, if any, responsibility for the consequences of the messages we transmit.

    We are learning a new process the hard way. I am fascinated with how our adaptation to the modern communicative age will proceed. I will have much more to say about this in future posts; consider this a preface.

  • The Skeptical Method

    When I was at SDSU getting my MA, I had to write a paper for my graduate seminar in linguistic anthropology. I chose to write about the question of whether or not Neanderthals were capable of speech. In researching the topic (using the card catalog, books, and bound journal articles – yes, we had the interwebs in 1999 but real research still had to mostly be done the old-fashioned way), I discovered that there was a great deal of disagreement on the topic. Some researchers proposed that, indeed, the evidence supported the hypothesis that Neanderthals had the physical capacity to speak in much the same manner as modern humans. Others proposed that, while there was a cultural basis for proposing language skill, there was not any physical/anatomical evidence that supported the speech hypothesis. Still others proposed that Neanderthals surely were capable of symbolic communication, but not at the sophisticated levels attained by modern humans. After sorting through the major arguments, I centered my paper on a discussion of each one, analyzing the strengths and the weaknesses, and ultimately concluding that while all of the major hypotheses were possible, more work needed to be done before reaching a conclusion, and that I found it likely that bits of each hypothesis would end up being validated (FYI – since I wrote that paper, a fossil Neanderthal hyoid bone has been discovered. This bone anchors the tongue and larynx muscles that make human speech possible. That, along with the presence in Neanderthal crania of a hypoglossal canal essentially identical to the one through which speech-related nerves connect to the brain in modern humans, makes the complex Neanderthal speech hypothesis into more of a true scientific theory).

    My linguistics professor lauded me for a thoroughly researched and well-written paper, but his final comment was this: “You can’t just write about what others think. At some point, you have to pick a theory and take a stand.” This bummed me out. I hadn’t taken a stand because no single hypothesis had completely convinced me. If I had been forced to choose, I would have sided with those who argued for complex Neanderthal speech, but I wanted to leave the options open. This, in my mind, is what makes the scientific method so brilliant: it leaves the options open.

    For those of you in need of a refresher, the scientific method basically goes like this:

    1. A verifiable truth, or fact, is observed.
    2. A tentative explanation of the observed fact, or hypothesis, is proposed.
    3. The hypothesis is tested.
    4. If the hypothesis is disproved, it must be rejected as an explanation of the observed fact. If it is not disproved, then the hypothesis may be provisionally accepted.
    5. If a hypothesis survives repeated rounds of testing, and continues to not be disproved, then it gains the status of theory.

    This requires several important corollaries to be properly understood. Probably the most important is that, for a hypothesis to be scientific, it must be testable and you must be able to disprove or reject it. This is because a scientific hypothesis cannot be proved – it can only be disproved. There are tons of hypotheses out there that are not testable, or at least, not testable with current technology. For example, you observe that the universe exists. You hypothesize that a supernatural creator designed the universe. This is a perfectly acceptable hypothesis, but it cannot be tested and rejected and is therefore not a scientific hypothesis. And that brings up another important corollary: the common vs. the scientific understanding of the words theory and hypothesis. The common understanding is that calling something a theory or a hypothesis means it is no better than a random guess: “Evolution is just a theory.” To which I answer, well, yes: evolution is a scientific theory, which means it has survived so many rounds of so many different kinds of tests, and never once been disproved, that it has attained a status almost as binding as physical laws. Another important corollary is predictive power. The best theories have predictive power, which means that you can reasonably expect to predict certain things to happen with the theory as your guide.

    But finally, my favorite corollary of all: no matter how many rounds of testing your theory has survived, if it should ever fail, you have no choice but to go back to the drawing board and revise your original hypothesis. This, in a nutshell, is how science works, and it is how we are explaining and doing things now that were unimaginable even a few generations ago. Science builds on its mistakes. Science learns from its failed experiments. And the best scientists are the ones who are willing to go back and look again, and revise, expand, edit, and redraft their hypotheses and theories to make them even better at being predictive. (A cautionary note: evolution, in particular, is a scientific theory that its detractors claim is still controversial within the scientific community. Nothing could be further from the truth. Evolutionary theory, to me, is truly one of the simplest and most elegant scientific explanations ever proposed. It is the grand unifying theory of biology and all life science, and it undeniably works. What researchers still debate are many of the more specific mechanisms of evolution’s operation. A great example of this is the debate over group selection vs. individual selection, which is robustly debated by no lesser thinkers than E.O. Wilson and Richard Dawkins. But neither Wilson nor Dawkins would ever think to propose that natural selection itself is in question. Although, as brilliant scientists, if it were ever disproved they would have to reject it!).
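
    For fun, here is that whole loop spelled out in a few lines of Python. This is a playful sketch of my own, not anything official – propose, test, and revise are stand-ins for the real intellectual work of science:

        def scientific_method(observation, propose, test, revise, rounds=100):
            hypothesis = propose(observation)        # step 2: a tentative explanation
            survived = 0
            while survived < rounds:                 # step 3: test, and keep testing
                if test(hypothesis):                 # the test failed to disprove it
                    survived += 1                    # step 4: provisionally accepted
                else:
                    hypothesis = revise(hypothesis)  # disproved: back to the drawing board
                    survived = 0                     # the testing record starts over
            return hypothesis                        # step 5: it has earned the status of theory

    Notice that the loop can only ever exit with a hypothesis that has survived every recent test; there is no branch in which anything gets proved.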

    Ok, so now you should hopefully have a grasp of the scientific method, and I can get to what I really want to say in this post: I use the scientific method in my attempts to come to conclusions about almost every issue I come across. However, many of the more controversial issues that people find important nowadays are not amenable to the rigorous testing portion of the scientific method, so I use a looser version that I am calling the skeptical method. It works in the same way as the scientific method, especially as far as the “disproving” portion is concerned. Here’s how it works (with a rough code sketch after the list):

    1. Encounter a controversial issue (say, whether or not to support the Affordable Care Act).
    2. Propose a tentative opinion of the controversial issue.
    3. Gather data against which to test the tentative opinion.
    4. Accept, reject, or revise the opinion as needed based on the data.
    5. Be prepared to revise the opinion based on new information.
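
    In code, the only real difference from the scientific method is that the data never stops arriving – another sketch of my own invention, with form_opinion, gather_data, and supported as placeholders for the actual hard work:

        def skeptical_method(issue, form_opinion, gather_data, supported):
            evidence = []
            opinion = form_opinion(issue, evidence)          # step 2: tentative opinion
            for datum in gather_data(issue):                 # step 3: gather the data
                evidence.append(datum)
                if not supported(opinion, evidence):         # step 4: the data contradicts it,
                    opinion = form_opinion(issue, evidence)  # so revise rather than dig in
            return opinion  # step 5: “final” only until the next piece of information arrives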

    As with the scientific method, the skeptical method has corollaries. First, controversial issues are controversial for a reason. They are often in conflict with people’s deeply held values. While I would consider these to be subjective, not scientific, they are still very important in the formation of the tentative opinion. Second, the skeptical method requires deep and careful critical thinking to be truly effective. This often takes work. If you are not willing to do the work it takes to support your tentative opinion, then it remains tentative. If you have done critical, careful research, and feel confident in the data you are using to support your opinion, then you might stop being tentative and become more confident of your opinion. But just like in science, you have to be able to support your hypothesis, and you have to be willing to consider new data. Third, people will very often disagree with you and marshal their own body of research in opposition to your opinion. It behooves the critical thinker to consider the opposition’s arguments. Even if you still disagree in the end, you will strengthen your argument by understanding the argument of the opposition… and sometimes, you might actually change your opinion. Even though this might seem like a defeat, it’s not. The skeptical method is not about personal victory; it’s about understanding the world. Part of that understanding involves the realization that sometimes – oftentimes – good, rational, moral people can have deep and irreconcilable differences of opinion. Sometimes, thoughtful, intelligent, compassionate people can take the same data and come up with completely opposite opinions. Sometimes, you will feel the urge to fling poop at that person’s head because you are so frustrated that they don’t seem to understand your beautifully reasoned and elegant opinion! Don’t do it. The true critical thinker accepts and understands that she may feel like she couldn’t possibly be more right, and there will always be someone who thinks she couldn’t possibly be more wrong.

    I want to end by saying that I do find facts to be much more compelling than opinions. Daniel Patrick Moynihan once said that “everyone is entitled to his own opinion, but not to his own facts.” If your opinion is supported by fact, then I respect that. However, I am becoming more and more concerned by what appears to be a trumping of opinion over fact in our national discourse – a topic that deserves its own post.

  • The Evolution of Pink Slime

    So-called pink slime has been all over the news lately. Friends have posted links and comments about it on Facebook, I have heard stories about it on NPR, and I’ve heard people talk about how they can’t believe our government would allow the meat industry to sell the stuff as food. Pink slime, known formally as lean, finely textured beef trimmings (LFTB), is certainly not a food product that is likely to provoke anticipatory salivation. The term pink slime is itself deliberately crafted to instead provoke a reaction of disgust. And, the associated news that the stuff is treated with ammonia to remove potentially harmful bacteria just adds insult to our collective sense of injury. But there’s a problem here: a visceral reaction to a name that was deliberately crafted to provoke exactly that reaction is no basis for making choices about what we eat.

    I find the whole uproar rather silly, myself. First let’s tackle the linguistic angle: the name pink slime. Some might argue that it’s misleading to relabel this edible meat substance with a name that does not reveal what it really is. The name “lean, finely textured beef trimmings” does not evoke the actual cow parts that are used to make it. The stuff is made by combining fatty trimmings and ligament material from the cow and spinning it in a centrifuge to separate the fats. It is pink and looks slimy; hence the media-friendly and consumer-alarming moniker “pink slime.” One thing to note is that this stuff is not sold as-is; it is combined with regular ground beef as a bulk additive, and can be up to 30% of the final ground beef product (whether raw, bulk meat or items such as hamburger patties). So we are not unwittingly consuming unadulterated pink slime; nor is it being fed as-is to kids in school. A second thing to note is that we use all manner of euphemisms to describe the things we eat, especially when it comes to meat products. Filet mignon sounds much more appetizing than “hunk of cow flank.” Bacon cheeseburger stimulates the appetite in a way that “salted fatty pig belly cheeseburger” does not. In fact, when raw, most meat is pretty slimy, so we might as well add that adjective to all our meats. The point is that these are subjective reactions. Call things what they really are and lots of people might think twice before eating them. It reminds me of the failed “toilet to tap” initiative that was proposed in San Diego several years ago. Once the descriptor “toilet to tap” caught on in the media, there was no way the public would abide this water treatment program, even though the reclaimed water from the sewer system was just as pure and clean as regular municipal tap water. The name killed it because people could not reconcile themselves to water that came from the toilet, no matter how much scientific evidence there was that the water was clean. I find this fascinating in light of the fact that municipal tap water is held in reservoirs before treatment, in which people drive boats, fish, and probably urinate, and which is filled with all sorts of animal and plant matter, both alive and decomposing.

    My second issue with this uproar has to do with food supply in general. There are seven billion people on this planet. They all need to be fed. In many places people subsist on foods that we here in the US would find appalling, and not merely because of cultural differences, but because some people are so poor that they will eat whatever they can. Our objection to LFTB is a beautiful example of a first-world problem. I know many people are rethinking where their food comes from and signing on to local food and slow food movements, and that’s all well and good, but within a country like the US, that is (for the most part) an upper-middle class movement. Poor people in this country do not have the luxury to worry about where the food comes from, much less exactly what is in it. For a poor family, knowing the kids will at least get lunch at school is a bigger concern than whether or not that lunch may contain pink slime.

    When agriculture arose 10,000 years ago, humanity began the evolutionary road towards pink slime. Agriculture allowed previously nomadic people to become sedentary. Sedentism led to expansions in technology and booms in population. Ultimately, agriculture allowed for centralized cities ruled by top-down leaders, supplanting the egalitarian cultures of hunting-gathering and small-scale agricultural groups. Technological innovations continued to abound and populations continued to boom, and to feed all those people, intensive, factory-driven, and mechanized industrial agriculture became necessary. Can we really turn back that process now, and all start growing our own gardens and raising and slaughtering our own livestock? I’m not talking a fancy herb garden, heirloom tomatoes, and hobby chickens; I’m talking feeding yourself and your entire family by the products of your own labor. We do not live in that world any more. We live in a world where a beef supplier will use every part of the cow. Our industrial food complex has grown so efficient that almost nothing goes to waste.

    I’m not blind to the fact that the beef producer is also trying to turn as much profit as possible; this is capitalism, after all. But I have no objection to seeing otherwise wasted parts of the cow get turned into an edible substance. As for the ammonia gas issue, it is simply a way to make the stuff safe. A chemical like ammonia is certain to provoke another knee-jerk reaction: it’s in glass cleaner! It’s a poison! Well, yes; but without understanding how the process works people somehow conjure a picture of the pink slime getting dipped in a bright-blue Windex bath, which is far from the case. I can imagine the other side of the coin if the stuff didn’t go through this process: how dare the government allow us to eat beef that has not been treated for bacterial contamination! (Which reminds me of another rant I have against what I see as a massively overreactive food-safety process in this country; I think it’s ludicrous to destroy thousands or even millions of pounds of a food because a few people got food poisoning – but that’s a rant for another day). In fact, much of our food goes through similar sanitizing processes to prevent illness. As far as I can tell, no one has ever died from eating ammonia-treated LFTB, but they have died from food poisoning caused by the very bacteria the ammonia treatment is designed to prevent.

    I can understand, to a degree, the people who argue that we have a right to know what is in our food so we can make an informed decision about whether to consume it, and I don’t object to the idea of more comprehensive food labeling. However, I still think this is a first-world and middle-class problem. How many people actually read food labels? Yes, the information should be there, but then the consumer does have some responsibility to think critically about what they see on the label if they decide to read it. I would bet that many of the people upset about pink slime have never bothered to really research what is in the other foods they buy at the store. Some people make it a point to not buy food products with lots of chemical additives and unnatural ingredients, but that is a tiny minority. Most of us are happy with the “ignorance is bliss” approach; and I would argue that if we didn’t take that approach then we might be paralyzed with largely unnecessary worry. Does anybody ever really stop to think about how many other people’s mouths have been on the fork they use at a restaurant? Wow, that’s gross, isn’t it? Of course the dishes at the restaurant are cleaned between uses, but to me, pink slime is no more dangerous than using a cleaned fork that has been in 1,000 different mouths. It’s gross if you think about it – so, don’t think about it!

    This controversy will fade as other things grab people’s attention, but what I fear is that whatever the next issue is, people will still have the same knee-jerk, uncritical reactions. Sometimes those reactions turn out to be completely justified, but the justification comes after the fact; it plays no part in the initial reaction. People need to come to conclusions that rely on more than a sound bite and an unappetizing label or picture that is designed to grab attention. Thinking critically means gathering facts and forming a provisional opinion that may be modified in light of future information. Being grossed out is not a good reason for objecting to a food product.

  • Economic Maladaptation

    Near the end of the semester in my Human Origins course, I teach about two concepts: the epidemiologic transition, and the demographic transition. Both of them have to do with overall improvements in quality and length of life in societies that have reached a certain level of knowledge and wealth. In the epidemiologic transition, knowledge and innovations regarding health and medicine combine to reduce the incidence of infectious disease, and generally increase the overall health and longevity of the population. Mortality from non-infectious diseases such as heart disease and cancer increases as life expectancy increases. So, instead of dying young of an infectious disease, you live longer and ultimately perish from a disease or condition linked to old age and/or the consequences of a Westernized lifestyle (such as poor diet and lack of exercise). Combine this with the demographic transition, which sees life expectancy increase, and a drop in death rates followed by a drop in birth rates as societies industrialize and modernize, and you have a perfect recipe for booming population growth. Not every society in the world has gone through both of these transitions, but enough have that what should have been viewed as a benefit is now becoming a detriment. Put in evolutionary terms, what was once adaptive is becoming maladaptive. I hypothesize that it is not these two transitions themselves that are to blame, but yet another transition, which I am going to call the economic transition.
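
    To see why the lag between falling death rates and falling birth rates produces a boom, it helps to run the numbers. Here is a back-of-the-envelope simulation in Python; every rate in it is invented for illustration, not taken from real demographic data:

        # Toy demographic transition: death rates fall first, birth rates
        # follow only decades later, and the population booms in the gap.
        population = 1_000_000
        birth_rate, death_rate = 0.040, 0.038   # near equilibrium, pre-transition

        for year in range(1, 151):
            if year > 20:   # epidemiologic transition: deaths start dropping early
                death_rate = max(0.010, death_rate - 0.001)
            if year > 70:   # demographic transition: births drop only much later
                birth_rate = max(0.012, birth_rate - 0.001)
            population *= 1 + birth_rate - death_rate
            if year % 30 == 0:
                print(f"year {year}: population {population:,.0f}")

    In the decades where deaths have already fallen but births have not yet followed, the growth rate is many times its starting value – that window is the boom.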

    So what is the economic transition? As the world has modernized, starting at least four centuries ago with the age of European exploration and colonization in the 16th and 17th centuries, the global economic system that we know today as capitalism has taken hold. Capitalism, and the quest for profit through the exchange of material goods, is in many ways a beneficial and adaptive system for human groups. However, link it to the natural human desire to achieve status, and then link status to the ownership of material things and the symbols of exchange that make that ownership possible (i.e., money), and add in the longer-lived and massively expanded human population we are dealing with today, and you have a recipe for maladaptive disaster. Capitalism, superficially, is extremely similar to biological evolution and natural selection – call it economic selection. Left to its own devices, the natural consequence of capitalism is to concentrate wealth in the hands of a very few people or groups. This can work in some circumstances, especially when the social group affected is reasonably sized. Competition can lead to greater resource acquisition for the overall group, which is then redistributed by the leaders who had the greatest hand in acquiring it. This is what happens in the Big Man system that used to characterize many native economies in places like Papua New Guinea. The Big Man worked hard and gained followers who worked on his behalf to grow the biggest garden and the largest herd of pigs, and as harvest or slaughter time came, he rewarded his followers for their hard work. Those who worked the most gained the most, but nobody went without basic necessities. Why? Because in small groups, the well-being of the group depends on the well-being of the individuals who comprise it. The Big Man, for his part, was well compensated for his leadership efforts, but he did not end up with portions that were much larger than those of his workers; his gains instead had to do with status and leadership power (which from an evolutionary standpoint tends to correlate with greater reproductive success – to me, this is the underlying impetus for the development of these sorts of systems).

    The Big Man system is a sort of proto-capitalism. Anybody could aspire to be a Big Man, and with enough hard work and charisma, individuals could work their way into the top status tiers of these groups. The key difference is that the Big Man did not keep the majority of the wealth for himself, and he did not attempt what true capitalism attempts today: gather the most wealth possible while paying as little as possible to acquire it (whether for raw materials, workers, overhead, or what have you). The capitalist world system is designed to concentrate wealth. It is theoretically true that anybody can compete in this system, but with 7 BILLION competitors, success is anything but assured, and the structural obstacles to reaching that success are more numerous and complex than I can possibly attempt to explain in one post.
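
    There is even a toy simulation, the so-called “yard-sale” model from econophysics, that illustrates this concentrating tendency. To be clear, this is an illustration of the principle, not a model of any real economy, and every number in it is made up. Each trade is a fair coin flip, yet wealth still piles up at the top, because the poorer party can only ever stake what little they have:

        import random

        random.seed(42)
        agents = [100.0] * 1000   # a thousand traders, all starting out perfectly equal

        for _ in range(2_000_000):
            a, b = random.sample(range(len(agents)), 2)
            stake = 0.1 * min(agents[a], agents[b])   # 10% of the poorer party’s wealth
            if random.random() < 0.5:                 # a fair coin flip
                agents[a] += stake; agents[b] -= stake
            else:
                agents[a] -= stake; agents[b] += stake

        agents.sort(reverse=True)
        print(f"wealth share of the top 1%: {sum(agents[:10]) / sum(agents):.0%}")

    Nobody cheats in this little world, and yet a small minority of the traders ends up holding a wildly outsized share of the wealth. Structure, not villainy, produces the concentration.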

    I held off on really explaining the economic transition until now because at least a basic knowledge of these economic systems is important. Nevertheless, I can describe it simply as a transition from small-group based competitive yet redistributive systems to a system based on personal financial gain that thrives on the perpetuation of class inequality. In a survival-of-the-fittest economic system, inequality is the only possible outcome. What’s even more insidious about this transition is that even those at the bottom of the class and income scale believe that this is the way it is supposed to be, and that the only way out is through acquisitiveness and consumption. I have already written on this at some length in posts discussing hegemony. This is hegemony in a nutshell. The economic transition is maladaptive because it relies on continued resource consumption, and it is linked to the large and long-lived global population that consumes those resources. The economic transition, if it continues to its logical conclusion, means the ultimate biological maladaptation for the human species, to wit: extinction.

    I actually didn’t mean for this post to be a treatise on my view of our world’s economic problems, but these things just come out as they come out. This is the starting point for many more specific posts to come. What started my ruminating on this particular topic (other than the fact that I ruminate about it just about every single day) was thinking about our obsession with material things, and wondering how in the world we can save ourselves from ourselves. How can we modify the system so that status comes from the person you are rather than the things you own? How can we actually slow down the economic engine, and adopt a philosophy of economic balance, instead of constant growth? When will we realize that the value of our lives comes from experiences, rather than possessions?

    Let me end this post with a question for my readers: what are your fondest memories? What makes you smile when you need a boost? Do these memories revolve around things, or people and experiences? One of my favorite memories, one I call on when I want to feel happy, is from 2001 when I surprised my mom by coming home from Albuquerque for Christmas one week early. I called her from outside her front door. As we spoke, I made it sound like I was still in Albuquerque. I knocked on her door and laughed to myself as she said “Hold on sweetheart, there’s someone at the door.” She opened the door and saw me, and I will never, ever forget the look on her face or how she dropped the phone and grabbed me into a hug of joy. This is a memory that I could never buy, yet it makes me happier than any material thing I have ever owned. I think that if we consciously remember what truly has given us joy in our lives, it may lead us out of the materialism-equals-happiness lie that so many forces would have us believe.