KYLE CARD

Bad Science, Biases, and Big Data: The Shifting Landscape of Responsible Conduct of Research

4/24/2018


 
Challenges associated with the responsible conduct of research (RCR) tend to change alongside a landscape shifted by technological advances. In 1998, Andrew Wakefield reported in The Lancet that a causative link exists between the measles, mumps, and rubella (MMR) vaccine and the appearance of autism. By the time it was discovered that he had financial conflicts of interest, had subjected children to unnecessary and invasive procedures without institutional review board approval, and had fabricated and selectively disregarded data that contradicted his hypothesis (Ferriman, 2004), the damage had already been done: parents were electing not to vaccinate their children against deadly viral infections, and a powerful anti-vaccination movement was born.

Many of these groups found new audiences on internet forums and message boards where they spread their misinformation, and their reach later grew dramatically through social media platforms and celebrity endorsements. During this time, the incidence of vaccine-preventable childhood diseases was rising (McIntyre and Leask, 2008). In a counterfactual scenario where Wakefield published his results decades earlier, before the advent of the internet, the consequences may have been far less severe. The misconduct of one unethical researcher conducting shoddy medical research can therefore be linked to public health challenges amplified by the spread of misinformation. These challenges are likely to rise in frequency and severity in the coming decades given the increasing pervasiveness of global communication. But above all, this case study is indicative of a much larger problem.

Many scientific studies are not reproducible. Two main factors drive this phenomenon. First, the “file-drawer effect” is a systemic publication bias toward reporting positive results and disregarding negative ones. When many independent studies test whether a relationship exists between two variables, some will inevitably show statistical significance even when no relationship in fact exists. These positive results are driven by chance alone (Munafò et al., 2017). In these instances, it is not the positive results that are interesting but their unpublished negative counterparts. Researchers who then repeat the study expecting a positive result will be unable to replicate it. Second, when a study initially shows no effect between two variables, researchers sometimes collect more observations in the hope that a significant effect will emerge. This practice often occurs under the misguided notion that adding observations brings one closer to the truth; instead, checking for significance as data accumulate increases the likelihood of finding a significant effect by chance alone. It is important to note that these factors reflect implicit biases and should therefore be kept separate from instances of scientific misconduct. To err is human, and non-reproducible results do not necessarily point to duplicity on the part of a research team.
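The first factor is easy to demonstrate with a short simulation (a minimal sketch in Python; the study parameters are invented for illustration): run a thousand “studies” of a perfectly fair coin, so that no real effect exists, and count how many nonetheless reach p < 0.05.

```python
import random
from statistics import NormalDist

random.seed(1)

def null_study(n_flips=100):
    """One 'study' testing a fair coin for bias: returns a two-sided p-value.

    The coin is fair by construction, so any significant result is pure chance.
    """
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    z = (heads - n_flips * 0.5) / (n_flips * 0.25) ** 0.5  # normal approximation
    return 2 * (1 - NormalDist().cdf(abs(z)))

n_studies = 1000
false_positives = sum(null_study() < 0.05 for _ in range(n_studies))
print(f"{false_positives} of {n_studies} null studies reached p < 0.05")
```

Roughly five percent of these no-effect studies come out “significant.” If only those ended up in journals while the rest sat in file drawers, the literature would appear to show a real effect where none exists.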

Nonetheless, these factors may have unintended consequences for the advancement of science. Research groups may decide not to report study replication failures or deviations when they believe a priori that the earlier findings must represent the true state of a given phenomenon. For example, in the early 20th century the physicist Robert Millikan measured the charge of the electron to a certain level of precision using an oil drop experiment. The experimental setup was elegant, but he used an incorrect value for the viscosity of air in his calculations and therefore reported an inaccurate electrical charge. In the years after Millikan’s mistake, researchers conducting their own experiments only slowly converged on the true value (Feynman, 1974).

Richard Feynman proposed an explanation for why it took researchers years to correct Millikan’s original estimate: when they obtained a result that deviated too much from Millikan’s, they assumed their data must be in error – and they discarded the results in turn (Feynman, 1974). Confirmation bias had stymied the pace of scientific progress, and it seems that sometimes there are tradeoffs when standing on the shoulders of giants. Feynman cautioned against our natural tendency to fool ourselves and urged us to stray from the precepts of “Cargo Cult Science”:
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it … In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another (Feynman, 1974).
We have the potential to improve scientific reproducibility and integrity using technology in the way Feynman envisioned. Integrity can be compromised unintentionally at many stages of the scientific process: through uncontrolled biases during hypothesis generation, experimental designs that lack proper randomization and blocking, lapses in quality control while carrying out an experiment, and the analysis of data with improper statistical methods. Partly in response, study pre-registration is becoming increasingly popular in clinical medicine and other fields (Munafò et al., 2017). In pre-registration, a research team details all aspects of the study design, potential outcomes, and statistical analyses before conducting the experiment proper. This framework, and any data derived from the study, are made public on the internet for others to examine and critique.

The benefits of adopting a pre-registration philosophy are multifaceted. First, it guards against the file-drawer effect: if all data are made publicly available, then negative results become discoverable by other researchers. Second, it protects against the consequences of researcher bias by requiring that analytical and statistical methodology be developed before the data are observed, so that these decisions remain data-independent (Munafò et al., 2017). Adopting an open science philosophy of honesty and transparency would go a long way toward curbing the negative effects of research misconduct and implicit researcher bias. However, other aspects of RCR, including data management and analytics, are becoming more important and challenging as well.

Data are becoming more voluminous, and the ability to generate massive amounts of data can easily outstrip the ability to store and share them. For example, data storage and sharing are major problems for Large Hadron Collider (LHC) experiments because they generate enormous amounts of data in a short amount of time; even after extensive filtering, the annual data rate exceeds 30 petabytes (CERN). The sheer volume of data would have made these studies impossible to conduct before technology caught up with our scientific ambitions: thirty years ago it would have taken a stack of roughly 21 billion floppy disks – about 43 thousand miles high – to store one year’s worth of data; now it takes 11,000 servers with 100,000 processor cores at CERN and a worldwide computing grid. Similarly, it took researchers over a decade to sequence and assemble the first human genome; now a genome can be sequenced in days at a tiny fraction of the cost. Ambition alone cannot drive some scientific discoveries; oftentimes the technological infrastructure must be in place beforehand. That said, the ability to generate and store large volumes of data is only one challenge of the big data epoch.
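The floppy-disk figure is easy to check with back-of-the-envelope arithmetic. The sketch below assumes 1.44 MB per 3.5-inch disk and a thickness of about 3.3 mm per disk; both are illustrative round numbers, not CERN’s.

```python
PETABYTE = 10 ** 15             # bytes
FLOPPY_BYTES = 1.44 * 10 ** 6   # capacity of a 3.5-inch floppy disk
FLOPPY_MM = 3.3                 # approximate thickness of one disk

annual_data = 30 * PETABYTE
disks = annual_data / FLOPPY_BYTES
stack_miles = disks * FLOPPY_MM / 1_000_000 / 1.609  # mm -> km -> miles

print(f"disks needed: {disks:.2e}")               # ~2.1e10, i.e. about 21 billion
print(f"stack height: {stack_miles:,.0f} miles")  # ~43,000 miles
```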

The allure of big data has led some to eschew the scientific method altogether. Chris Anderson, former editor-in-chief of Wired Magazine, wrote in his provocative essay The End of Theory: The Data Deluge Makes the Scientific Method Obsolete:
Petabytes allow us to say: ‘Correlation is enough.’ We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot (Anderson, 2008).
Similarly, Google’s research director Peter Norvig amended the statistician George Box’s aphorism “all models are wrong, but some are useful” by noting that we can often succeed without them. Anderson and Norvig believe that by adhering to the scientific method one will often reach mistaken conclusions because the method itself is arbitrary: it relies on a logical framework that leaves room for subjectivity and bias. We should instead replace the scientific method with big data analytics because “with enough data, numbers speak for themselves” (Anderson, 2008). Powerful computational algorithms can explore large data sets and find regularities independent of any subjectivity or bias on the part of the researcher, and the effectiveness of this approach supposedly increases as the data grow larger (Calude and Longo, 2017). In my opinion, this Wild West approach of shooting from the hip is dangerous, and all that is missing is the tumbleweed.

Spurious correlations often scale with the volume of data. This notion is similar to one I presented earlier in this essay: the a posteriori addition of more experimental replicates increases the likelihood of finding a significant effect caused by chance alone. The difference with big data experiments, however, is that the sheer quantity of information is a product of the process itself, not of a researcher’s intent to find positive results per se. Calude and Longo proved mathematically that the number of arbitrary correlations depends only on the size of the data set, and that such correlations can appear even in randomly generated data. In fact, their results imply a general trend of diminishing returns: the more information one has, the more difficult it is to glean meaning from it. Replacing the scientific method with big data analytics is therefore not advisable, and researchers must take great care to remember the maxim “correlation does not imply causation.”
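Their result can be illustrated with a toy experiment (a sketch; the variable counts, observation count, and correlation threshold are arbitrary choices): generate data sets of pure noise and count how many pairs of variables come out “correlated.” The count climbs as the data set widens.

```python
import random
from itertools import combinations

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spurious_pairs(n_vars, n_obs=30, threshold=0.4):
    """Count variable pairs with |r| above the threshold in pure random noise."""
    noise = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
    return sum(abs(pearson(a, b)) > threshold for a, b in combinations(noise, 2))

for n_vars in (10, 50, 200):
    print(f"{n_vars:3d} random variables -> {spurious_pairs(n_vars)} 'correlations'")
```

Every variable here is noise, yet the number of apparently meaningful correlations grows with the number of variables, simply because the number of pairs does.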

Responsible conduct of research is like an intricate dance: it encompasses much more than abstaining from duplicity. To conduct science with the utmost integrity we must understand and limit our biases during all stages of the scientific process, from hypothesis generation, to publication of results, to the analysis of big data. We can limit these biases by adopting an open science policy of transparency and by learning the proper experimental design and statistical methods for our research questions. But most importantly, we ought to adopt a philosophy of deep humility - as my good friend and colleague Zachary Blount says, "Hypotheses are not children - they have no inherent worth or dignity, nor any right to exist. They have to justify themselves. Test the daylights out of them."

References
  1. Anderson, C. 2008. The end of theory: The data deluge makes the scientific method obsolete. Wired Magazine. Retrieved from https://www.wired.com/2008/06/pb-theory/.
  2. Calude, C. S., and G. Longo. 2017. The deluge of spurious correlations in big data. Foundations of Science. 22, 595–612.
  3. CERN. Computing. Retrieved from https://home.cern/about/computing.
  4. Ferriman, A. 2004. MP raises new allegations against Andrew Wakefield. BMJ. 328, 726.
  5. Feynman, R. 1974. Cargo cult science. California Institute of Technology Commencement Address.
  6. McIntyre, P., and J. Leask. 2008. Improving uptake of MMR vaccine. BMJ. 336, 729–730.
  7. Munafò, M. R., B. A. Nosek, D. V. M. Bishop, K. S. Button, C. D. Chambers, N. P. du Sert, U. Simonsohn, E. J. Wagenmakers, J. J. Ware, and J. P. A. Ioannidis. 2017. A manifesto for reproducible science. Nature Human Behaviour. 1, 1–9.

Inclusivity for Aspiring Disabled Scientists: Can We Do More?

8/15/2015


 
Over the course of my first year in graduate school I have had wonderful conversations with brilliant women scientists about their unique experiences, and these conversations have caused me to reflect on my own experiences in turn. Although I’m not a woman scientist, I am a disabled scientist, and the two experiences share some similarities, including discrimination by others, whether intended or not, and apprehension caused by the fear of being stereotyped. It also needs to be said that these feelings aren’t unique to women or the disabled; they are often shared by many minority groups, including those of different races and ethnic backgrounds. For the remainder of this post, however, I will focus solely on my experiences as a disabled scientist. I hope that you, the reader, will come away a bit more informed than you were going in. I also hope that you come away asking yourself whether institutions are doing enough to encourage inclusivity for disabled individuals who yearn to be researchers, and what, if anything, ought to be done.

Before I go any further I must first talk a little about myself. I have Moebius Syndrome, a very rare congenital neurological disorder characterized by underdeveloped VI (abducens) and VII (facial) cranial nerves. Hence, those of us with Moebius lack all forms of facial expression; this is most evident to others in our inability to smile. Other symptoms may be just as evident: we can’t frown, blink, or move our eyes side-to-side. Some of us, including myself, also have limb abnormalities caused by associated syndromes; I don’t have fingers on either hand, I’m missing toes on my left foot, and I don’t have a right foot at all, so I compensate by wearing a below-the-knee prosthetic leg. Lastly, because we cannot move our mouths in a coordinated fashion, we often have speech difficulties.

I’ve learned a great many things about myself, and about the human condition as a whole, by viewing the world through the lens Moebius has afforded me. For instance, I’ve learned to cherish the differences in everyone, and I find it abhorrent when differences are used as a means for discrimination, in all of its forms and stripes. On a similar note, I’ve learned that all of us have a propensity to be unaware of the blind spots where our biases stow away. But the most important thing I’ve learned is to be honest with myself about my disability. When that happened, I learned to be honest with myself about every other aspect of my life, because all honest self-reflection that came afterward flowed from the realization that I live with something I can never change. I’ve learned to use this constant in my life as a source of strength, and for that I feel incredibly grateful.

Despite all of that, doubts and concerns still seep in. My greatest concern in practicing science had been how people would perceive me and my ability to do good science. Would they think me as capable as everyone else? Would my inability to smile come off as disinterest when talking science with others? These doubts were worrisome when I first wrestled with the idea of applying to graduate school, and particularly worrisome during the recruitment process. I certainly felt like I had more to prove than the cohort around me. Ultimately, though, my desire to be true to myself and follow what I love was greater than the sum of my apprehensions.

A year into my graduate school career, I am happy to say that every experience I’ve had has shown those initial apprehensions to be rather unfounded. As a student and researcher at Michigan State, every mentor I’ve had, every faculty member I’ve encountered, and every fellow student I’ve met and befriended has accepted me for who I am: a disabled, yet capable, scientist. This has made all the difference to me.

Hindsight is a wonderful thing; now I can look back on my experiences and try to learn from them. Since I accepted my program’s offer, I have wondered whether most other disabled scientists have followed a path similar to mine. I am particularly interested in knowing how many disabled individuals have talked themselves out of approaching STEM fields in the first place. This all brings me to the question I would like to ask you, dear reader: do you think we are doing enough at the institutional level to dismantle perceived “barriers to entry” into STEM fields for individuals with disabilities? I realize that many institutions have diversity groups made up of students and faculty who support and encourage minority students while simultaneously acting as liaisons between these students and the institution itself. But how many of these groups seek to encourage disabled individuals who may be apprehensive about joining in the first place? Moreover, how many are actively trying to foster a culture within the institution that is disarming and welcoming to the hesitant, yet qualified, aspiring disabled researcher? Looking back on my own experience, and on data from the NSF showing a severe underrepresentation of disabled individuals in STEM, I don’t think enough is currently being done.

I think a great start to tackling these problems lies in outreach to the disabled community of aspiring scientists, at the secondary and post-secondary levels, by those of us who are disabled. This is a goal of mine as a graduate student and researcher at Michigan State, and beyond: if I can help knock down the perceived barriers of even one disabled individual, then I will have succeeded. Further, I think that institutions and campus diversity groups ought to closely examine their own inclusivity policies and brainstorm ways to better ease the apprehensions of aspiring disabled scientists and researchers. On the whole, I hope that these words will spark a dialogue about the underrepresentation of disabled individuals within STEM and what we can do to change it.

When I Heard the Learn’d Astronomer

1/30/2015


 
One of my favorite poems:

When I Heard the Learn’d Astronomer
by Walt Whitman

When I heard the learn’d astronomer,
When the proofs, the figures, were ranged in columns before me,
When I was shown the charts and diagrams, to add, divide, and measure them,
When I sitting heard the astronomer where he lectured with much applause in the lecture-room,
How soon unaccountable I became tired and sick,
Till rising and gliding out I wander’d off by myself,
In the mystical moist night-air, and from time to time,
Look’d up in perfect silence at the stars.


What is Evolution?

1/30/2015


 
The theory of evolution: there is probably no greater disconnect between what scientists seek to explain and what the general population of non-scientists thinks we explain. Moreover, this contention exists only between science and the public, not among scientists themselves. Not only does the overwhelming majority of biologists accept that evolutionary theory is the only explanation that can fully account for the diversity of life, but so does every other major scientific association around the world [1]. It has been a long-standing practice and goal of mine (and an aim of this blog) to ferret out scientific misconceptions while helping others learn. Therefore, in this blog post I hope to explain, in a very general sense, what the theory of evolution is and then give an example that I feel will be accessible to those who are unfamiliar with biology.

First, we need to define what evolution is. Put simply, evolution is the change in populations of organisms over time, and starting from this observation we can explain the diversity and panoply of life. (I should note that this definition isn’t as precise as it should be; we will refine it once we have fleshed out some details.) Every individual within a population of organisms has a genome (the complete set of all of its DNA) that can be passed on – in part, in sexually reproducing organisms – through generations from parents to offspring. DNA carries all the instructions needed to carry out life-processes, and it does this through genes. A gene is a stretch of DNA that carries the instructions to make a protein, and proteins are the “work-horses” of life. You can think of a gene as a sentence, and a protein as the message or idea that the sentence conveys. There is one other thing I must mention about genes: there exist alternate forms of each gene, called alleles. Alleles have slightly different gene sequences, and hence they carry instructions to make proteins with slightly different structures and characteristics. To continue with the analogy, “the boy kicked the ball” and “the boy kicks the ball” are similar to alleles in the sense that they are slight variants of a sentence that provide slightly different information.

Furthermore, most plants and animals have two sets of chromosomes (packaged DNA within the cell): one set comes from the mother and the other from the father. Each chromosome set contributes one allele of each gene; these organisms are said to be diploid. So, ultimately the offspring receives two alleles for any given gene, one from each parent. The specific combination of alleles across the genome leads to the plethora of phenotypes – characteristics, or traits – expressed by an individual. Let’s look at a specific, and contrived, example of eye color in animals. Say the father passes along an eye color allele that provides the instructions for making a protein that gives his child brown eyes, and the mother passes along an allele for brown eyes as well. In this case, the child will end up with brown eyes. Now say the mother passes along an allele for blue eyes instead. Somewhat surprisingly, the offspring will still have brown eyes. In both of these cases the brown-eye allele is said to be dominant: one brown-eye allele is sufficient for brown eyes regardless of what the other allele is – it “masks” the trait that the other allele would provide. Finally, assume both parents pass along an allele for blue eyes. As you probably guessed, the offspring will now have the blue-eye phenotype. Now we get a sense that the blue-eye allele is recessive: for the blue-eye trait to show in the offspring, no brown-eye allele can be present. To put all of that simply, the combination of alleles that an organism has will have a huge impact on its traits and characteristics.
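The dominance logic above can be written as a tiny function (a hypothetical sketch, using the conventional shorthand of 'B' for the dominant brown allele and 'b' for the recessive blue one):

```python
def eye_color(maternal_allele, paternal_allele):
    """Return the phenotype for a simple dominant/recessive eye-color gene.

    'B' is the dominant brown allele; 'b' is the recessive blue allele.
    One copy of 'B' is enough to mask 'b', so blue eyes require a 'b'
    from both parents.
    """
    return "brown" if "B" in (maternal_allele, paternal_allele) else "blue"

print(eye_color("B", "B"))  # brown
print(eye_color("B", "b"))  # brown: the dominant allele masks the recessive one
print(eye_color("b", "b"))  # blue: shows only with two recessive copies
```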

To expand upon this, let’s delve a little deeper. The individuals within a population of organisms are not genetically identical. This is intuitive, but it is central to evolution; we only need to look at the huge phenotypic diversity in humans to understand it: we have many different skin colors, hair colors, eye colors, sizes, shapes, metabolisms, attached/detached earlobes, hereditary diseases, propensities for athleticism, etc.; no two humans are the same (except in the case of identical twins). Populations of organisms, whether they are bacteria, humans, dogs, fish, or palm trees, all harbor genetic variation. Genetic variation exists when more than one allele is present at a given locus (location within the genome) in a population. This variation allows for the differential expression of traits between individuals, and it is the raw material that evolutionary processes act on.

This is where natural selection comes into the fold. Natural selection is the process by which given phenotypes become more (or less) widespread within a population over time if those phenotypes increase (or decrease) an organism’s survival and reproductive success – its fitness – within that environment. This makes intuitive sense: if organism A has a trait that makes it better adapted to survive and reproduce in its environment than organism B, then organism A will pass that trait (and the alleles that cause it) to its offspring more readily than organism B will. In other words, organism B is more likely to be out-competed by organism A. Over time, alleles that confer a fitness advantage within an environment will increase in frequency in the population, whereas alleles that confer diminished fitness will decrease in frequency (frequency here simply means proportion). The effect of natural selection is that, over time, populations become better adapted to their environments. Now we can finally give the precise definition: evolution is the change in allele frequencies within populations over time.
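That definition can be watched in action with a minimal numerical sketch. Assume, purely for illustration, an advantageous allele starting at 10% frequency whose carriers are 10% fitter; each generation, alleles spread in proportion to their fitness.

```python
def next_freq(p, w_adv=1.1, w_other=1.0):
    """One generation of selection: the allele's share grows with its fitness."""
    mean_fitness = p * w_adv + (1 - p) * w_other
    return p * w_adv / mean_fitness

p = 0.10                    # starting frequency of the advantageous allele
history = [p]
for _ in range(100):
    p = next_freq(p)
    history.append(p)

for gen in (0, 25, 50, 100):
    print(f"generation {gen:3d}: frequency = {history[gen]:.3f}")
```

Even a modest 10% fitness edge carries the allele from rare to nearly fixed within a hundred generations; this change in allele frequency over time is evolution by the definition above.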

Now let’s tie this all together with a simplified example. Say a fish acquires a mutation in an allele and passes that allele to her daughter. Say also that the mutated allele is dominant and causes small fins (the phenotype). As a consequence of her small fins, the daughter will struggle to swim as fast or as well as her siblings, making her less adept at catching prey and more vulnerable to predation. Due to her phenotype she unfortunately has a greater chance of not surviving to reproductive age, when she would pass the mutated allele to her offspring. If she dies before reproducing, that allele will be purged, or eliminated, from the gene pool.

Conversely, a fish that acquires a mutation giving her larger fins will have a higher probability of escaping predators and catching prey. She will therefore be more likely to survive to reproductive age and pass that mutated allele to her offspring. Furthermore, her offspring will have an advantage over fish without the allele, and they, in turn, will be more likely to survive and reproduce. Over successive generations the allele will rise in frequency within the gene pool, so over time more and more fish in the population will have larger fins, and the population as a whole will be more fit within that environment. Notice how the environment selected for those with larger fins and against those with smaller fins - this is natural selection. Notice also how inaccurate it is to call this process of differential reproductive success “random,” as many creationists tend to do. In fact, it is completely non-random.

Lastly, and more as a side note, most creationists will say that the example above describes microevolution – change in allele frequencies within a population over time – and they would be right. As a general rule, they tend to agree that microevolution occurs; even they can’t deny that evolutionary phenomena such as antibiotic resistance in bacteria are a serious problem. That said, they generally do not accept macroevolutionary changes above the species level, such as speciation events (the formation of new species), common ancestry, descent with modification, etc. [2]. However, this position is untenable because macroevolutionary changes occur via the same mechanisms as microevolutionary changes, just on a larger time scale. Saying that macroevolution can't happen while microevolution can is like saying I can take a few steps forward but can never reach the other side of the room: large changes accumulate from small intermediate ones.

References:

1. Statements from Scientific and Scholarly Organizations. Retrieved from http://ncse.com/media/voices/science

2. Theobald, D. 2012. “29+ Evidences for Macroevolution: The Scientific Case for Common Descent.” The Talk.Origins Archive. Retrieved from http://www.talkorigins.org/faqs/comdesc/


The Watchmaker Argument

4/15/2014


 
I felt compelled to write about creationism after seeing a high number of news articles (and well-publicized debates) on the subject over the last couple of months. More specifically, I would like to discuss Paley’s watchmaker argument for the existence of an intelligent designer and explain why it is rife with cognitive biases and logical fallacies.

I will tackle how it is incompatible with the theory of evolution in a later blog post dedicated to just that discussion. 

Is there a designer? Let us imagine that you are strolling down a beach on a warm summer’s night as a misty breeze is blowing in calmly from the sea. You decide to stop and gaze at the setting sun and marvel at how tranquil it looks as it begins to pierce the horizon. Now you begin to feel something hard and metallic between your toes. You look down to find a gold pocket watch at your feet – chain and all. You pick it up and observe the intricacies of the patterns etched softly into the gold, the smoothness of the watch face, and the small details of the watch’s hands. You turn it over and open up the back to reveal a complicated interplay of gears and cogs that fit perfectly together in such a way as to move the hands precisely. You surmise that it is quite an efficient and elegant design, made with a specific and rather apparent purpose – to accurately tell time – and made by a demonstrably skilled watchmaker. After further careful contemplation, you realize that you have never seen such a watch being made, nor do you know exactly how the designer made it, except that he made it with great skill, care, and patience. Lastly, you surmise that a natural order or process did not create the watch right there in the sand, whole cloth, irrespective of the designer. You know this because the watch is incredibly complex. No, the only order invoked in the watch’s formation was the watchmaker’s intelligence.

Still holding the watch, you look back up to the setting sun; it has fallen deeper into the horizon by now. A thought crosses your mind: you know that the incredibly complex machine you hold in your hands was, by necessity, created by an intelligent designer – a watchmaker – with purposeful design. Yet the setting sun in front of you, the whole of life, and the entirety of the universe are many orders of magnitude more complex and intricate than anything a watchmaker could ever produce. After much thought on this apparent incongruity, you arrive at the conclusion that the sun, life, the universe, and anything complex within nature must, like the watch, also have an intelligent designer who made them with purposeful and orderly design [1].

This is the watchmaker argument in a nutshell. It is rooted solidly in a teleological framework and was first proposed by the Christian apologist and clergyman William Paley in his 1802 book Natural Theology. A teleological argument is an argument for the existence of an intelligent designer based on the premise that evidence for this designer can be found in the apparent design and purpose of nature [2]. Teleological arguments existed long before Paley and his watchmaker argument, but his contribution to this philosophical framework remains popular today, as his argument is cited by many intelligent design proponents. There are a couple of problems with it.

First, it is an argument by way of false analogy. Put simply, the watchmaker argument goes: a complex watch, with its display of order, necessitates an intelligent designer; some phenomenon X (life, the universe, etc.) is also complex and displays order; therefore, phenomenon X necessitates an intelligent designer as well [3]. But if two entities share a common trait, why must they share traits beyond the one they already have in common? In other words, if the watch and the universe share the trait of “complexity,” derived from their perceived “order,” why must they also share the trait of “being designed”? The answer is that they need not. This becomes patently obvious if we change the entities in the argument to something more familiar: green leaves and dollar bills.

We can restate the argument as follows: green leaves grow on trees; the US dollar bill is green; therefore, the US dollar bill grows on trees. Here the two entities, the leaves and the US dollar bill, share a common trait – they are both green – but it does not follow that they must share others, such as the ability to grow on trees. Bringing it back around to the watchmaker argument, just because a watch and the universe share the common trait of “complexity” does not mean that they must share the trait of “being designed”.

Furthermore, the Scottish philosopher David Hume criticized teleological arguments on the grounds that they are based upon our inherent experiences of objects [3]. He argues that we can easily distinguish between human-designed objects – watches, for example – and objects that are not made by humans, because we have a lifetime of experience with both classes of objects; we see and interact with both on a daily basis. Conversely, Hume says, we cannot impute a purpose-driven design to the universe, because we have no experience with a range of other universes to compare ours to.

When making any inference by way of inductive reasoning, it is not rational to draw general conclusions from one isolated instance. For example, if we observed a single white swan, it would be presumptuous of us to claim that “all swans are white”. The swan in front of us could be white for any number of reasons, some of which may not be shared by other swans. Based upon this sole example, the evidence that all swans are white is underwhelming at best. Our conclusion would be better justified – or it could even be falsified – if we compared our observed instance with many other observed instances of swans. *In fact, not all swans are white. There is a species of black swan, Cygnus atratus, native to the southeast and southwest regions of Australia. I will return to this “all swans are white” example in later blog posts.* With the swan example in mind, we can see that we cannot adequately impute a purpose-driven design to our universe without having any other universes to compare it to; we are limited to a sample size of one.

Lastly, all of this assumes that “order” is a well-defined and objective term in its own right, which it is not. This may get a little dense, so I will do my best to explain. It is often said that an orderly universe is “finely tuned”; in other words, a universe in which the universal constants, such as the strength of gravitational attraction or the nuclear forces, are “calibrated” in an extremely precise manner. This calibration of the cosmic dials allows the nuclei of atoms to remain intact, countless stars to be born in the bellies of stellar nurseries, and the chemical processes needed for life to chug along. Conversely, if some of those constants were askew by even a tiny fraction, the universe would be unable to maintain the structural stability of atoms, or ignite suns, or harbor life. Knowing this, there must have been a designer who turned the cosmic dials precisely to where they are now, right?

Not so fast. As humans, we impose the idea of finely tuned “order” onto the universe simply because we exist within it and are able to impart our conscious observations onto it. In this sense, “order” has to be the case, because if it weren’t – if we lived in a universe where most of the constants were incompatible with the existence of life – we wouldn’t exist to declare the state of the universe one way or the other. Furthermore, if the universe ran on a slightly different set of universal constants that otherwise did not hinder the ability of humans to exist, we would end up calling that universe “orderly” and “finely tuned” instead. Ultimately, we are inherently biased to think that whichever universe we exist in is a universe of “order”, simply because we exist within it to judge it as such. We take “all that which exists” and couch it in the word “orderly” – and again, we are limited to a sample size of one. I will end this post with Ayn Rand’s words on the subject: “Our whole concept of order comes from us observing reality, and reality has to be orderly because it’s the standard of what exists.”

References

1. Paley, W. (n.d.). The Watchmaker Argument. Natural Theology, ch. 1–3. Retrieved from http://homepages.wmich.edu/~mcgrew/PaleyWatch.pdf

2. Ratzsch, D. (2010, October 3). Teleological Arguments for God’s Existence. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/teleological-arguments/

3. Wallace, M. (n.d.). The Teleological Argument. Lecture notes retrieved from http://www.unc.edu/~megw/TeleologicalArg.html


Maybe All This

4/9/2014

Welcome to the blog! I had been mulling over a few ideas for topics to kick off this blog when I saw today that an excellent science communicator and YouTuber who goes by the name c0nc0rdance made a video spotlighting a poem by the Polish poet and Nobel laureate Wisława Szymborska-Włodek. The poem, “Maybe All This”, was first published in the New Yorker in 1992 and soon became one of her best-known pieces. The poem itself is excellent, and the author herself is inspirational; so I can think of no better way to start off this blog than to present her poem to you.
MAYBE ALL THIS

Maybe all this 
is happening in some lab?
Under one lamp by day
and billions by night?

Maybe we're experimental generations?
Poured from one vial to the next,
shaken in test tubes,
not scrutinized by eyes alone,
each of us separately 
plucked up by tweezers in the end?

Or maybe it's more like this:
No interference?
The changes occur on their own
according to plan?
The graph's needle slowly etches 
its predictable zigzags?

Maybe thus far we aren't of much interest?
The control monitors aren't really plugged in?
Only for wars, preferably large ones, 
for the odd ascent above our clump of earth,
for major migrations from Point A to Point B?

Maybe just the opposite:
They've got a taste for trivia up there?
Look, on the big screen a little girl
is sewing a button on her sleeve.
The radar shrieks,
the staff comes at a run.
What a darling little being
with its tiny heart beating inside it!
How sweet its solemn 
threading of the needle!
Someone cries, enraptured, 
Get the Boss,
tell him he's got to see this for himself!
