It’s not like I don’t feel like I’m a Jew. I feel like I don’t have a choice about being a Jew. Your cultural heritage isn’t like a suitcase you can lose at the airport. I have no choice about it. It is who I am. I can’t choose that. It’s a fact of me. But even when I was 14 or 15, it didn’t make that much sense to me that there was this Big Daddy who created the world and would act so crazy in the Old Testament. That we made up these stories to make ourselves feel good and explain the world seems like a much more reasonable explanation. I’ve tried to believe in God but I simply don’t.
Granting dying patients the power to determine when their lives will end has long been a serious point of contention for some American religious groups, who view these right to die laws as government embracing a “culture of death.” Well-known right to die activists such as Jack Kevorkian have countered that religious ethics should not subvert sound medical reasoning. As of now, the argument against establishing right to die laws remains the dominant American position, as only six states and the District of Columbia currently allow physicians to prescribe medications that hasten death. To put it more bluntly, a theological belief is forcing millions of American families and individuals to endure needless suffering that most of us spare our pets.
On its face, the religious objection to right to die laws is based on an otherwise morally praiseworthy worldview that all human life is sacred. Understanding how this seemingly positive belief became the chief impediment to ending so much needless human suffering presents a great lesson in the underlying conflict between science and dogmatic belief.
To be clear, I do not think this conflict need be a zero-sum game. Indeed, the Constitution provides a great blueprint for how religious faith and science can interact in the same space to mutual benefit. Moreover, a strong argument can be made that a constant state of tension is how our marketplace of ideas should function. That said, I do agree with critics of dogma such as neuroscientist and author Sam Harris in one very important respect: the main problem with dogma, no matter how benign, is that it is unresponsive to new evidence and discoveries.
The practical issue is that most religious scripture was written centuries before modern science came about. It is therefore impossible for scripture to take into account the evidence that modern science has produced. This often places literal, dogmatic interpretations of spiritual texts in conflict with readily provable realities that modern science has revealed: for instance, that the earth is billions, not thousands, of years old. Oftentimes, the descriptive conflict between religious dogma and modern science has no direct impact on most people’s everyday lives. When the subject matter spills into medical ethics, however, the debate can have very real consequences.
I found the following quote from Bob Seidensticker to be an excellent explanation of Evangelical presuppositional apologetics. If you have ever had a discussion with someone who is a presuppositionalist, you know how frustrating such discussions can be. This quote doesn’t provide an answer to presuppositionalism as much as it shows how presuppositionalists think. You will likely never win an argument about the existence of God with a presuppositionalist since they reject an evidentiary approach when it comes to the existence of God and supernatural nature of the Bible. It does, however, help to know HOW such people think. This will keep you from wasting hours talking with someone about God or the Bible, only to have them default to “the Bible says.” End of discussion.
Christian use of “If”

Christians can also make daring use of this word, but it’s a different kind of daring. Here are some examples where they conjure up the supernatural with an If.
If God exists, it makes not only a tremendous difference for mankind in general, but it could make a life-changing difference for you as well. —William Lane Craig
If Jesus was literally God incarnate, and if it is by his death alone that men can be saved, and by their response to him alone that they can appropriate that salvation, then the only doorway to eternal life is Christian faith. —John Hick
If Jesus rose from the grave, that’s the most important event in history. It proves Jesus is who He said He was, that Christianity is true, that you will be resurrected and brought before God to account for your crimes against Him. —Alan Shlemon
If God had the power necessary to create everything from nothing [that is, create the universe], he could probably pull off the miracles described in the New Testament. —J. Warner Wallace

Sometimes the if is assumed. For example, the atheist raises the Problem of Evil, and the apologist replies, “[If we first assume God,] Who are you to question God?”
Perhaps you can see the problem. Yes, if that amazing and unevidenced claim about God or Jesus is true, then your conclusion holds, but why would you think it would? It’s like saying, “If Santa exists, I’ll get lots of presents” or “If friendly aliens are among us, they’ll give us lots of cool technology” or “If I can speak to the dead, I will gain great wisdom.” The conclusion might logically follow, but why accept the ridiculous if premise? No reason is given.
In Christians’ Alice-in-Wonderland logic, the premise is the conclusion. The four quoted examples above simplify to “If God exists, then God exists.” The Christian apologist could cut to the chase, declare that God or Jesus exists, claim victory over the atheist, and be done with it, but then of course the sleight of hand would be obvious. The second half of the “If God exists . . .” statement is window dressing compared to the fundamental claim that God exists. The conclusion was buried in the premise all along.
This is the Hypothetical God Fallacy. It’s a fallacy because no one interested in the truth starts with a conclusion (God exists) and then arranges the facts to support that conclusion. That’s backwards; it’s circular reasoning. Rather, the truth seeker starts with the facts and then follows them to their conclusion. Christians don’t get an exemption, and they must do it the hard way, like any scientist or historian, showing the evidence that leads unavoidably to the conclusion.
The notion of “intelligent design” arose after opponents of evolution repeatedly failed on First Amendment grounds to get Bible-based creationism taught in the public schools. Their solution: Take God out of the mix and replace him with an unspecified “intelligent designer.” They added some irrelevant mathematics and fancy biochemical jargon, and lo: intelligent design, which scientists have dubbed “creationism in a cheap tuxedo.”
But the tuxedo is fraying, for intelligent design has been rejected not just by biologists but also by judges who recognize it as poorly disguised religion. Nevertheless, its advocates persist. Among the most vocal is Michael J. Behe, a biology professor at Lehigh University whose previous books, despite withering criticism from scientists, have sold well in a country where 76 percent of us think God had some role in human evolution.
Like his creationist kin, Behe devotes his time not to giving evidence for intelligent design but to attacking evolutionary biology. As Herbert Spencer said, “Those who cavalierly reject the Theory of Evolution, as not adequately supported by facts, seem quite to forget that their own theory is supported by no facts at all.” But Behe’s theory, promulgated by the Discovery Institute, Seattle’s intelligent-design organization, does demand support. Who, exactly, is the designer, and what evidence is there that this designer makes nonrandom mutations? Is the designer an immaterial god, in which case we need to know how this god violates the laws of physics by causing mutations, or is the designer material, like a space alien, in which case we must understand the physical methods whereby aliens change our DNA?
And what is an example of a designed mutation? (Behe is silent here.) Since humans are placed in the same family as other great apes (Hominidae), Behe’s theory predicts that we arose without a designer’s intervention. But here he backpedals, asserting that there are “excellent reasons to suspect those differences [between humans and other apes] are well beyond Darwinian processes.” Sadly, he doesn’t give these reasons, but I’d guess they stem from the Christian belief that Homo sapiens is a special creation of God. Such ad hoc claims, derived from religion, explain why intelligent design has been deemed by the courts as “a mere re-labeling of creationism, and not a scientific theory.”
In 1998, the Discovery Institute drafted the “Wedge Document,” a secret plan (leaked in 1999) to spread Christianity in America by teaching intelligent design and fighting materialism. One of the plan’s 20-year goals was “to see intelligent design theory as the dominant perspective in science.” Well, now it’s 20 years on, and despite the efforts of Behe and other neo-creationists, intelligent design has been discredited as science and outed as disguised religion. It’s no surprise, then, that “Darwin Devolves” was published by HarperOne, the religious, spiritual and self-help division of HarperCollins.
What statistics are available on cases of failed abortions in which a baby is born alive? How often does this happen?
There is some limited data on babies born alive as the result of an abortion procedure, but it’s unclear what the medical circumstances were in each of these cases. There is more extensive data on when abortions are performed. We’ll go through the available numbers.
First, in terms of a baby’s viability — the ability to survive outside the womb — one 2015 study in the New England Journal of Medicine on preterm births said: “Active [lifesaving] intervention for infants born before 22 weeks of gestation is generally not recommended, whereas the approach for infants born at or after 22 weeks of gestation varies.” The study noted the “extremely difficult” decision on whether to use treatment for infants “born near the limit of viability,” saying that while in some cases treatment is clearly indicated or not, “in many cases, it is unclear whether treatment is in the infant’s best interest.”
The study looked at the cases of 4,987 infants “without congenital anomalies,” or birth defects, born before 27 weeks gestation. It found that 5.1 percent of babies born at 22 weeks gestational age survived and 3.4 percent survived “without severe impairment.” Several weeks further into gestation, at 26 weeks, 81.4 percent of babies survived, 75.6 percent without severe impairment.
Abortions in such later stages of pregnancy (a full-term pregnancy typically lasts 38 to 42 weeks) could be performed because of congenital anomalies, but that study provides some sense of when a fetus without birth defects could be viable and when decisions on medical interventions could be made.
Late-term abortions are rare. The Centers for Disease Control and Prevention found that 1.3 percent of abortions in the U.S. were performed after 21 weeks gestational time, according to 2015 data. The CDC’s report showed that 65 percent of abortions that year occurred in the first eight weeks of pregnancy.
Forty-three states have banned “some abortions after a certain point in pregnancy,” according to the Guttmacher Institute, which researches reproductive health issues.
What about abortions that result in a live birth? One CDC report on death certificates for infants from 2003 to 2014 showed “143 deaths involving induced terminations” of pregnancies during that 12-year period, 97 of which “involved a maternal complication or one or more congenital anomalies.” The data “only include deaths occurring to those infants born alive; fetal deaths (stillbirths) are not included.”
The CDC notes that the 143 number could be an underestimate of induced terminations of pregnancies. In looking at the data, the CDC found some cases where it was unclear whether a pregnancy termination was induced or spontaneous. In such cases, if congenital anomalies and maternal complications also were involved, the CDC assumed those were spontaneous terminations, due to the “strong association between severe congenital anomalies or maternal complications and premature labor and birth.” In other words, the CDC assumed such cases were premature labor as opposed to a decision to induce labor or end the pregnancy.
On Feb. 5, during the State of the Union address, President Trump implied that women like me executed our babies after birth.
I am an obstetrician and gynecologist who has delivered newborns who could not live, either because they were extremely premature or had birth defects. I have provided abortion care for women after 24 weeks gestation faced with similar outcomes who chose a surgical abortion over a vaginal delivery.
And I also delivered a son who was born to die — my own son.
According to the president, we are executioners.
If you are going to accuse me of executing my child, then you need to know exactly what happened. It’s not a pleasant story and the ending is terrible. I wouldn’t blame you for not wanting to read it. But you need to know the truth, because stories like mine are being perverted for political gain.
It pains me to remember. And yet, it is the only memory of my son, and so even though it cuts, I keep it close.
I was pregnant with triplets and at 22 weeks and three days, my membranes ruptured — that is, my water broke, far too early. I knew it was catastrophic. Almost no baby born before 23 weeks can survive.
With the knowledge that I would probably be a parent for only a few minutes, I headed to the hospital. I told my husband at the time that it would be all right, that maybe I was wrong.
I lied. It was easier on me.
After we consulted with a high-risk obstetrician and a neonatologist, I heard the dismal news I had expected: The survival rate for male triplets at 22 weeks and three days was less than 1 percent.
And so I waited. I waited to bestow the names I had so carefully chosen on three boys who seemed destined to die at birth.
For a day nothing happened. That was cruel because I began to hope that maybe I could hang on for a few weeks and maybe one or more would survive. I couldn’t help but indulge in the fantasy. And I resented that hope because I knew the worst day of my life was almost here.
I know other parents in similar situations also cling to hope. I have delivered those women; sometimes their wrenching sobs push their child who is born to die into the world. Maybe their child had a lethal birth defect. Maybe their child was extremely premature, like my Aidan. There are a lot of ways a newborn can be born to die.
After a fitful night of sleep at the hospital — because when you know Death is standing at the doorway waiting for your baby, you don’t sleep well — I got up to use the bathroom.
And then, all alone, I realized I was delivering. There was no time to cry out. I stood alone in the hospital bathroom and delivered my own son. He fit in my hands.
And then a nurse parted everyone and brought him to me wrapped in a blanket. He was dying, she said. Did I want to hold him?
I was being poked and prodded. Needles piercing my skin. Drugs for sedation. I was being held down (I don’t resent that; I just couldn’t cooperate, and I know it was an emergency and everyone was really trying). A speculum was also in my vagina, opened wide so a doctor — a friend of mine trying not to cry — trimmed Aidan’s umbilical cord dangling from his placenta that was still inside my uterus.
I tell myself it was all those things that prevented me from holding him, but I know the truth.
I wasn’t brave enough.
If I held him and saw him die, then I would know exactly what I was going to face if the other two delivered (ultimately, my other two sons survived).
As Aidan’s parents we had decided that invasive procedures, like intravenous lines and a breathing tube in a one-pound body, would be pointless medical care. And so, as we planned, Aidan died.
If you have the time, please read Dr. Gunter’s heartbreaking article in its entirety. It certainly casts a different light on pregnancy complications and late-term abortions; a light that anti-abortionists don’t want people to see.
The right question for the [U.S. Supreme] court is whether a religious symbol on public property endorses one religion over others. The Peace Cross clearly does. … At a time when Americans subscribe to a wide variety of religious beliefs — or none at all — it’s vital for government to be religiously neutral.
The egomaniacal and rapacious drives of a molester who blots out all sense of right and wrong, brutally disregarding the pain he is causing children, have often found a parallel in churches bent on protecting themselves at the expense of thousands of victims. That disregard is a malignancy in the church . . .
If religion or any institution depends on the sexual exploitation or subordination of children or women, then it is better that such institutions should cease to exist. If it is a question of the survival of the institution of the church versus the survival and safety of children, then our allegiance clearly must be with children.
The entire concept of a “New Atheism movement” comes from defensive defenders of religion. I think of it not as a movement but as the overdue examination of an idea: Does a supernatural deity exist, and should our morality and politics be shaped by the belief that it does? For various reasons—the intrusion of theo-conservatism during the presidency of George W. Bush, the rise of militant Islam, an awareness of the psychological origins of supernatural beliefs, sheer coincidence—a quartet of books appeared within a span of two years, and pattern-spotters invented a “New Atheist Movement.” (I would not downplay coincidence as the explanation—random stochastic processes generate clusters of events by default.)

Judged by degree of belief, the “new atheism” is not only not dead but it is winning: every survey has shown that religious belief is in steep decline, all over the world, and the drop off is particularly precipitous across the generations, as compared to just drifting with the zeitgeist or changing over the life cycle. This is reflected in laws and customs—homosexuality is being decriminalized in country after country, for example.

These trends are masked in the public sphere by two forces pushing in the opposite direction: religious people have more babies, and religious communities turn out in elections and vote in lockstep for the more conservative candidate. If the “new atheist” message of Christopher Hitchens et al. was “Atheists should have more babies,” or “Atheists should form congregations and vote en masse for the same candidate,” then yes, it was an abject failure. But if it is “The evidence for a supernatural being is dubious, and the moral norms of legacy religions are often pernicious,” then it is carrying the day, or at least riding a global wave.
I’ve always been skeptical about the utility of identifying as an “atheist,” because it rarely seems helpful to heap the false assumptions that surround this term upon one’s own head. For this reason, I’ve never been eager to wear the label “new atheist” either.
However, there was something genuinely new about the “new atheism.” The publication of our four books in quick succession moved the conversation about faith and reason out of rented banquet halls filled with septuagenarians and brought it to a mainstream (and much younger) audience. The new atheists also made distinctions that prior atheists tended to ignore: For instance, not all religions teach the same thing, and some are especially culpable for specific forms of human misery. We also put religious moderates on notice in a new way: These otherwise secular people who imagine themselves to be on such good terms with reason are actually abetting the forces of theocracy—because they insist that everyone’s faith in revelation must be respected, whatever the cost.
The new atheism has not disappeared. It has merely diffused into a wider conversation about facts and values. In the end, the new atheism was nothing more than the acknowledgement that there is a single magisterium: the ever-expanding space illuminated by intellectual honesty.
With few exceptions, scientists and philosophers think that morality is at bottom based on human preferences. And though we may agree on many of those preferences (e.g., we should do what maximizes “well being”), you can’t show using data that one set of preferences is objectively better than another. (You can show, though, that the empirical consequences of one set of preferences differ from those of another set.) The examples I use involve abortion and animal rights. If people are religious and see babies as having souls, how can you convince them that permitting elective abortion is better than banning abortion? Likewise, how do you weigh human well being versus animal well being? I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls.”
Principles of right and wrong guide the lives of almost all human beings, but we often see them as external to ourselves, outside our own control. In a revolutionary approach to the problems of moral philosophy, Philip Kitcher makes a provocative proposal: Instead of conceiving ethical commands as divine revelations or as the discoveries of brilliant thinkers, we should see our ethical practices as evolving over tens of thousands of years, as members of our species have worked out how to live together and prosper. Elaborating this radical new vision, Kitcher shows how the limited altruistic tendencies of our ancestors enabled a fragile social life, how our forebears learned to regulate their interactions with one another, and how human societies eventually grew into forms of previously unimaginable complexity. The most successful of the many millennia-old experiments in how to live, he contends, survive in our values today.
Drawing on natural science, social science, and philosophy to develop an approach he calls pragmatic naturalism, Kitcher reveals the power of an evolving ethics built around a few core principles — including justice and cooperation — but leaving room for a diversity of communities and modes of self-expression. Ethics emerges as a beautifully human phenomenon — permanently unfinished, collectively refined and distorted generation by generation. Our human values, Kitcher shows, can be understood not as a final system but as a project — the ethical project — in which our species has engaged for most of its history, and which has been central to who we are.
Human skin color reflects an evolutionary balancing act tens of thousands of years in the making. There’s a convincing explanation for why human skin tone varies as a global gradient, with the darkest populations around the equator and the lightest ones near the poles. Put simply, dark complexion is advantageous in sunnier places, whereas fair skin fares better in regions with less sun.
That may seem obvious, considering the suffering that ensues when pale folks visit the beach. But actually, humanity’s color gradient probably has little to do with sunburn, or even skin cancer. Instead, complexion has been shaped by conflicting demands from two essential vitamins: folate and vitamin D. Folate is destroyed by the sun’s ultraviolet (UV) radiation, whereas the skin kick-starts production of vitamin D after being exposed to those same rays.
Hence, the balancing act: People must protect folate and produce vitamin D. So humans need a happy medium dosage of sun that satisfies both. While the intensity of UV rays is dictated by geography, the amount actually penetrating your skin depends on your degree of pigmentation, or skin color.
That’s the basic explanation, proposed in 2000 and fleshed out since by anthropologist Nina Jablonski and geographer George Chaplin.
A range of skin colors evolved at different times, in different populations, as humans spread across the globe. In addition to these genetic biological changes, groups have also developed cultural adaptations to deal with variable sunlight. For instance, we can consume diets rich in folate and vitamin D. We can also build shelters, wear clothing and slather on sunscreen to block UV rays.
Skin color is one of the most obvious and (literally) superficial ways humans differ. But the evolutionary story behind this variation is shared: Over the course of human evolution, complexion evolved from light to dark to a continuous gradient, mediated by geography, genes and cultural practices.