• A Modest Proposal?

    A critique of a recent research paper wherein the authors argue that AI-generated content is more environmentally friendly than human writing or art

    When I began teaching organic chemistry, I watched whatever open course lectures I could find for guidance and inspiration. The lecture series that had the most profound effect on my approach to science education was Professor McBride’s Freshman Organic Chemistry series from Yale’s open courses. What stuck with me most from the lectures was his suggestion that when thinking scientifically, one’s first question should be “How do you know?”, and the second should be “Compared to what?”

    “Compared to what?” is the question that came to mind when I recently stumbled across a study claiming that generating writing and images using artificial intelligence is hundreds to thousands of times more carbon efficient than having a human write or paint the same amount of content. The article, titled The carbon emissions of writing and illustrating are lower for AI than for humans¹, offers a comparative analysis of the carbon emissions of popular AI systems such as Midjourney or ChatGPT versus the carbon cost of a human doing the same task. The authors factor in things like the cost of training the AI (a one-time event) and then find the carbon cost per page of text or image. This is compared to the average yearly carbon footprint of a human, prorated to the time it would typically take a human author or artist to produce a similar result (one hour per page of writing, for example). A human, just by existing in society, produces somewhere between roughly 100 and 1000 times more carbon dioxide in an hour than ChatGPT produces in the short time it takes to generate a page. Therefore, the authors conclude, having AI do the writing is 100 to 1000 times more energy efficient than doing it ourselves.

    It was at this point that I realized that I could dismiss this paper’s conclusion out of hand. Not because I found any flaw in how they got their numbers, but because their conclusion contains an implication that renders it absurd. Let’s assume that they are correctly accounting for all the energy costs of AI generation versus the average carbon cost of a living person. Their conclusion, that “AI has a substantially lower carbon footprint than humans engaged in the same task,” is then technically true, but what is it supposed to demonstrate? The fatal flaw in their reasoning is that they compare the very existence of the human writer to the operational cost of the AI, as if the former will go away if we choose the latter.

    There are several obvious problems with this framing. First of all, AI doesn’t prompt itself, so a human’s carbon footprint ought to be counted on the AI side as well. The authors, to their credit, recognized this and even estimated the carbon produced by the human writing the prompts, but they conspicuously failed to add it to the cost of AI generation. Had they done so, AI generation would still have been more efficient, but by an order of magnitude less (roughly 10 to 100 times more efficient rather than 100 to 1000 times). Putting this aside, however, the claim that it is more efficient to have writing or art produced by AI than by a human creator falls apart when we consider what efficiency means.

    The Oxford dictionary defines efficient as: “(especially of a system or machine) achieving maximum productivity with minimum wasted [emphasis added] effort or expense.” Other definitions similarly place the avoidance of waste at the center of efficiency. And the authors explicitly take their results to imply that AI generation is more efficient than human creative work: “In sum, due to its substantially lower impact than humans at at least two important tasks, AI can play an important role in various sectors of society without, at present, running afoul of problematic carbon emissions. While the carbon footprint of AI is nontrivial, the footprint of humans doing the same work is far greater, and should not be discounted in the assessment of AI.”

    This is the crux of it. If we are comparing two modes of production and deciding which is more efficient, the comparison only makes sense if we have a choice of one or the other, not both. That’s the way you avoid waste. But the waste we would need to avoid for AI to be more efficient isn’t extra human labor, it’s an extra human, existing! If the people are still alive and doing other things, they are going to be producing more or less the exact same amount of carbon dioxide. Therefore, the carbon footprint of AI, as modest as it may be compared to that of human beings, is still an additional cost. Presumably the authors are not suggesting that human writers and artists be terminated to make way for the machines. So all of their research and analysis results in a moot point (don’t worry, though: the authors disclose that they used AI to help them write the paper, so hopefully not too much of their labor was wasted).

    Another way to illustrate the central fallacy is to point out that, in an effort to show the full range of the carbon cost of human labor, the authors calculated the carbon footprints of both a resident of the US and a resident of India, representing the high and low ends of per capita carbon impact, respectively. ChatGPT is calculated to be 130 times more carbon efficient at writing than a resident of India, and 1100 times more efficient than a US-based writer. But this shows the absurdity of using carbon footprints for comparison. People in the US have a higher average carbon footprint than people in India for reasons that have nothing at all to do with the work of writing or illustration. Imagine for a moment that all writers in the US gave up their trade and all of the professional writing was outsourced to India. Assuming the former writers go on living (and making a living), it would be foolish to expect this reshuffling alone to reduce carbon emissions, let alone by a factor of ten. Our carbon footprint is higher in the US because, as a country, we drive and fly more and consume more electricity. And now our AI data centers have pushed it even higher.
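
    For the curious, here is a minimal sketch of how a comparison like this is computed. The numbers below are my own assumed round figures, not the paper’s exact inputs:

        # Recreating the style of the paper's comparison with assumed round
        # numbers; the footprints and the per-page AI cost are illustrative
        # guesses, not figures taken from the paper.
        HOURS_PER_YEAR = 365 * 24  # 8760

        us_annual_kg = 15_000    # assumed US per capita footprint, kg CO2e/year
        india_annual_kg = 2_000  # assumed India per capita footprint, kg CO2e/year
        ai_per_page_kg = 0.0015  # assumed AI cost per page of text, kg CO2e
                                 # (amortized training plus inference)

        def human_per_page_kg(annual_kg, hours_per_page=1.0):
            # Prorate a yearly footprint to the hour spent writing one page.
            return annual_kg / HOURS_PER_YEAR * hours_per_page

        for label, annual in [("US", us_annual_kg), ("India", india_annual_kg)]:
            ratio = human_per_page_kg(annual) / ai_per_page_kg
            print(f"{label} writer: ~{ratio:.0f}x the AI's per-page emissions")

    With these made-up but plausible inputs, the ratios land near the paper’s reported 1100 and 130, which underscores the point: the result is driven almost entirely by the human’s background footprint, not by anything about the work of writing itself.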

    This paper is a classic case of flying too close to the sun, because the data they present does support some less dramatic claims (again, I’m assuming their numbers are sound for the sake of argument, as this is not my field of expertise). For instance, they compared the carbon generated by a laptop or desktop computer over the hour it would take a human writer to produce a page and found that AI uses significantly less energy, mostly because it produces the content faster. This is a real potential efficiency gain, though much more modest in magnitude than AI versus a living US resident. But for the efficiency gain to be realized, the prompter would need to shut down their computer immediately after prompting and go touch grass or something for an hour. Unfortunately, the premise of AI is that it will make us all more productive, which will often mean that the hour freed up from writing will just be used to prompt even more, compounding productivity. Or, just as likely, the human writer will spend the liberated hour aimlessly scrolling social media.

    The authors make no effort to hide their pro-AI bias, and they advocate for AI adoption throughout the paper. In fact, they present their paper, whose first draft they disclose was written by AI, as a piece of praxis. Should their conclusion hold, after all, using AI to write one’s research manuscript instead of writing it oneself is another way to go green. It almost becomes a moral imperative (“Oh, you still write yourself? Guess someone doesn’t care about the climate.”). But their conclusion does not hold. The authors, as people living in society, produced the same carbon footprint they would have produced without AI. No carbon waste was avoided there. And they may have saved some computer usage by drafting the manuscript with AI, but by their own admission they edited and revised the manuscript quite extensively. Presumably on their computers. And I humbly suggest that if they had taken the hours saved by AI and used them to think this whole thing through a little more, they might have retooled their argument before putting it out into the world. But by giving the pro-AI crowd a peer-reviewed research paper to bandy about as a faulty rebuttal to the environmental concerns around AI, they caused many people like me to dedicate considerable kilowatt-hours to pointing out the glaring error in their reasoning. Unintended consequences are the order of the day in the AI age.

    (Like everything on this blog, this article was written entirely without the use of AI. Any flaws in the reasoning are attributable to me alone.)

    1. Tomlinson, B., Black, R.W., Patterson, D.J. et al. The carbon emissions of writing and illustrating are lower for AI than for humans. Sci Rep 14, 3732 (2024). https://doi.org/10.1038/s41598-024-54271-x

  • Prompting Learners

    For some time I’ve had a project idea on the back burner for an in-class module centered on a Socratic dialogue. The idea is that a student volunteer would interact with me as I delivered the lesson (I had the mechanism of second-order nucleophilic substitution in mind). I want to turn a passive lecture into an active dialogue. The inspiration to use dialogue as a didactic tool I owe to my advisor, who put me on to Galileo’s classic Dialogue Concerning the Two Chief World Systems, in which Galileo defends the heliocentric Copernican view of the cosmos against the ancient geocentric Ptolemaic view by way of a fictional dialogue between the characters Salviati and Simplicio (you can guess who defends which view by their names).

    I’ve experimented with teaching dialogues before. Several years ago I wrote a loose script where I explained the surprising role of entropy in driving both the hydrophobic effect and the spontaneous folding of proteins through a conversation with a skeptical interloper who pushes back and asks tough questions. As in Galileo’s dialogue, we both make our case to an initially neutral third person who stands in for the audience. Two friends from a philosophy circle I belonged to played these other roles and together we performed it as an optional extracurricular event open to my organic chemistry students (three students out of two hundred showed up to watch it, bless them).

    My interest in developing Socratic dialogue for organic chemistry was rekindled and took on new urgency with the unfolding A.I. crisis in education. There is an administrative push underway to reconfigure education into a crash course that trains students to prompt ChatGPT for answers, then dress up and regurgitate the output. Driven either by a desire to ride the trend or by fear of being left behind, many instructors are acquiescing and ceding their teaching role, becoming glorified assistants to the A.I. The irony is that A.I. is effectively replacing the mental work of both teacher and student. In my view, this is as misguided as a trainer advising their client to send a robot to the gym to do the workout in their place. It’s a confusion of roles. Instead of teaching students how to prompt an A.I. to explain scientific concepts effectively, we should be prompting the students to create their own explanations.

    At its best, the Socratic method succeeds by turning the student into the teacher. They must be supplied with the building blocks of knowledge and a sound theoretical blueprint that shows how it all fits together, and this is the teacher’s job. But the student must synthesize this knowledge into a framework that makes sense to them. They must be able to convey it to others, receive critique and reevaluate. The teacher can help with a leading question here or there, but the student must realize the connections for themselves. This work cannot be outsourced to a third party.

    I’ll be honest and admit that as much as I detest and resent A.I., I’m finding its ability to stochastically extrude explanations of scientific concepts to be more impressive with every climate-melting model update. Maybe this means that it will replace me eventually. Apparently, there will be a new “study mode” feature for ChatGPT that will throw back some Socratic questions at the user. It’s as if they realized and are trying to correct for the fact that their replacement for a teacher also replaces the student. But the student can never be replaced; there is no education without a learner. We need to make sure that we are prompting the learners, giving them feedback to improve their output and build their reasoning capacities, and not simply preparing them to manage their impending replacement.

    ***************************************************************************

    A few prompts I gave to my students at the end of this semester:

    Draw me a four-panel comic explaining the SN2 reaction mechanism using curved arrows. (quite a lot of theoretical knowledge is contained in this deceptively simple-looking mechanism, from structure-reactivity relationships to molecular orbital theory to chemical kinetics and transition-state theory to molecular geometry and chirality. Incidentally, when I was young I used to draw cartoon character versions of protons and electrons, imagining them as the Autobots and Decepticons of the atomic world.)

    Come up with a workflow to explain the process of predicting the major product of the reaction of 2-chlorohexane with sodium ethoxide in ethanol at 55 degrees Celsius. The workflow should address how to predict reactivity, regioselectivity and stereoselectivity, if applicable. Write this in the style of Hunter S. Thompson. (because why should ChatGPT have all the fun?)

    And finally: Write a poem in the style of Emily Dickinson explaining how many ¹H NMR signals will appear for cyclohexanol and why. (just trolling them a little at this point.)

  • I recently bought a chessboard so that I can play against my young daughter and we can learn the game together. In a previous essay on chirality (right/left asymmetry) I used the example of boxing to describe the possible combinations of two chiral objects. Looking at my new wooden chessboard, it occurred to me that the game of chess provides a more ready and accessible visual explanation of this phenomenon.

    The image above is a chessboard set up in the standard way, where the queen goes on her own color. Notice first that if not for the king and queen being two different pieces, the setup would be highly symmetrical. With two kings and no queen, or vice versa, we would be able to cut the board into two equal halves both crosswise and lengthwise (we will ignore the black/white symmetry breaker throughout this discussion for simplicity).*

    But the fact that there is a king and a queen means that we can arrange them in two distinct directions relative to the player. The king side could be either to the right or to the left of the queen side. The convention of putting the queen on her own color yields a curious result (as seen above): white and black mirror each other, but the king side is to the right for white, while for black the king side is to the left. Remember that the king side is relative to the player’s point of view. So if you are seated behind black, you will see your king to the left of your queen. If you are seated behind white, your king is to your right. Viewed from above, however, the kings and queens of both colors line up in the same order.

    In organic chemistry we have a term for this kind of object: a meso compound. The official definition from the International Union of Pure and Applied Chemistry for a meso compound is “the achiral member(s) of a set of diastereoisomers which also includes one or more chiral members.” To put this in plain terms, it is, in its simplest form, an object with a right-handed side and a left-handed side brought together so that each side mirrors the other. Put your right and left hands together in the prayer position and you have formed a meso object. In chemistry, “meso compound” applies only to molecules that exhibit this specific kind of symmetry, but I see no reason not to use it to describe all manner of objects in the world.

    A meso object is, as per the official definition, always a member of a set of possible stereoisomers that includes chiral enantiomers. This means that we can rearrange the pieces on the board to form a ‘right-handed’ chessboard or a ‘left-handed’ one. In the image above, the kings and queens have been set so that the king side is to the right from both players’ perspectives. The chessboard is now a chiral object, with both halves having the same right-handed ‘turn’. By definition, every chiral object has an enantiomer, and the double-left-handed counterpart is shown below.

    It is important to point out that there is no inherent difference in the mechanics of play between the two chiral chessboards (right and left) above. But there is an immediate and consequential difference between either of them and the standard ‘meso’ chessboard: in the chiral setups, each king faces the opposite queen in the same column (or file), while in the meso setup, the kings face each other and the queens face each other. Whenever a pair of stereoisomers differ in their internal mechanics, we call them diastereomers. A pair of chiral objects are enantiomers, and they will always have the same internal mechanics, the only difference being that the parts are arranged into either a right-handed or a left-handed turn.

    (Chess board images made with lichess.org)

    *Note: there is yet another interesting symmetry element, which is the orientation of the board to the players. From the perspective of both players, the white corner square could either be on the right side of the board or, by rotating the board 90 degrees before setting up, on the left. It is standard convention to orient the board so that the white corner square is to each player’s right. Combined with the queen-on-her-own-color convention, this gives the arrangement shown in the first image, with the king side to white’s right and to black’s left. Orienting the board so that the white corner square is to each player’s left (by rotating the board 90 degrees) would mirror this, swapping the king side and the queen side. Nothing in this essay turns on this difference. What it does change is, for example, whether the king side rooks start on their own color (as in the standard setup) or on the opposite color.

  • Ship of Theseus

    In the physical sciences, experiments are designed to potentially falsify some hypothesis, which is an educated guess about some state of affairs in the physical world. Philosophy, in contrast, proceeds by way of thought experiments. A thought experiment is some imagined situation or puzzle that is designed to test the boundaries of a definition, a concept, or a system of ideas.

    One of the most enduring thought experiments from classical antiquity is known as the Ship of Theseus. In ancient Greece, the story goes, a ship commemorating the voyage of the mythological hero Theseus was maintained by the people over a long period of time. Whenever some of the oak planks started to rot, they were replaced with new wood, and so after some years very little of the original material of the ship remained. The question is: if every part of the ship has been replaced, is it still the same ship that made that first famous voyage? Or is it at this point a new and completely different ship?

    This thought experiment is interesting because the parts of the ship are replaced little by little. If the original ship were just burned one day, and afterwards a brand-new ship just like the original were constructed, we would call that new ship a mere replica. It’s the gradual replacement of parts that makes this question worth thinking about, because it mirrors the human condition. We are all changing little by little, and yet we consider ourselves to have a stable personal identity over time. This isn’t just about character growth or metaphorical change, either; we are physically changing. The cells, biomolecules and fluids of the human body are constantly being changed out and refreshed. It is likely that relatively few of the atoms currently making up your body have been there from early on. We are, all of us, walking and breathing Ships of Theseus.

    Are we then completely different people than we were in the past? Sometimes when editing a video essay, rather than re-record to fix a mistake, the creator will add a voice-over, joking “Hey, this is future me. I just wanted to correct the record on something that past me said right here.” And people often seriously say things like “I’m a completely different person now.” But it’s clear that we don’t really believe this. For one thing, you may have grown and changed as a person since you were younger, but “present you” still feels entitled to the returns of an investment that “past you” made twenty years ago. I may cringe at some of my old artwork as I improve over time, but if I hung a painting on the wall two years ago, I still want to be able to say that I, and not some other person who no longer exists, am the artist. Similarly, we hold each other accountable and make amends for wrongs we committed in the past. Perhaps we can say that we remain the same person on some level, but it is in the nature of people to change over time by incorporating little pieces of their environment into themselves through the simple steady process of living.

    A final note on thermodynamics. There are three classifications of systems: open, closed, and isolated. All living things are open systems, meaning that we exchange both matter and energy with our surroundings (a closed system exchanges only energy, and an isolated system exchanges neither). In truth, any ‘system’ is a construct, something that we define by separating it from the rest of the universe, which we define as the “surroundings”. Ultimately, there is but one isolated system: the universe in its entirety. The changes that we experience could be viewed as nothing more than the rearrangement of some small parts of this all-encompassing oneness. As Parmenides might say to Heraclitus (or Heraclitus to Parmenides), the more things change, the more they stay the same.

    I’ve been teaching organic chemistry for about 10 years now, and introductory physical chemistry for maybe half as long. In that time I’ve had to explain certain concepts what feels like hundreds of times, and I’ve gotten pretty efficient at it. Any long-time teacher of any subject will develop their own way of breaking down, organizing and repackaging their subject for their students. And eventually we all move on or retire, and the next generation of instructors comes along and starts the process over from scratch. This has its benefits. A fresh pair of eyes will confront the same old problems in slightly new ways. Plus, teachers must grow and develop in their subjects just like students. The learning is never done. Still, it seems a shame that so much of the work of advanced teachers, the culmination of long careers, gets lost like tears in rain.

    I encourage every teacher to save and refine their best material as if they were planning to write a book on teaching their subject. Not a standard textbook, more like a teaching manual: a handbook for new instructors in that subject. I started collecting my own material in a Moleskine notebook. Now that my ‘grail diary’ has been filled cover to cover, I’m expanding on my material and saving it in a folder on my computer. I’m working on a framework to bring my teaching material together into a coherent system, and I do intend to write the book. In some ways, this blog is meant as a preparatory exercise for that writing project.

    Below are a few snapshots of my collected lecture notes.

  • Life isn’t fair, is it? No one consulted us teachers and educators before launching these large language models that would effectively break many parts of education as we know it. And now it’s our responsibility to fix things, to rebuild and rethink, because no one else could do a good job of it. So, where do we go from here? How do we face the challenges that this disruption imposes, and does it create new opportunities to transform the teaching practice and the learning experience for the better?

    Let’s survey the challenges. The most obvious casualty of large language models is the essay assignment, or really any written assignment not completed in class. Cheating on essays, book reports and research papers has always been possible, but the chatbots brought the convenience and the likelihood of getting away with wholesale plagiarism to a level previously undreamt of. And because these generators can be accessed on any smartphone or device, students can potentially take them into class and use them on exams.

    But the challenge goes far beyond cheating. Students will outsource their thinking to these models, relaying teachers’ questions to them as a prompt and parroting back the output. This short circuits the learning process entirely. I’ve experienced this already in a large college lecture hall. After I explained the diminishing effect of inductive stabilization through sequential covalent bonds, a well-meaning student eagerly recited an explanation via analogy that she had just asked ChatGPT for. Like so much LLM output, the analogy seemed plausible at first blush but fell apart under scrutiny, and I took the opportunity to remind the class that ChatGPT doesn’t think, exactly. Now, I am sure that an A.I. optimist would say that this was a beautiful example of how chatbots can be leveraged to facilitate and enhance classroom discussion, but I see it as an unwelcome interloper that frustrates the learning experience by taking the student’s place in the driver’s seat. I want the student to engage with the material, to come up with their own metaphors and interpretations that we can analyze together, not to stand on the sidelines while I spar with a chatbot that’s stochastically simulating expertise in my subject. Worse yet, what about all of the students who go to ChatGPT later for a second explanation but don’t get the output immediately vetted by me and end up feeling falsely confident in a flawed or superficial understanding of a topic? How do we make sure that students are putting in the mental work necessary to build real understanding, and how do we fairly grade them for their effort?

    One major theme connecting many of the proposed solutions is that we must pivot from written assignments to assessing students in the classroom. Many have suggested going back to oral exams, putting more emphasis on classroom discussion, or having students compose drafts on the spot. Anything that requires them to demonstrate knowledge and understanding without the opportunity to consult a chatbot. However, there are considerable practical challenges to implementing these solutions. Oral exams eat up a lot of class time and can be downright impractical for larger class sizes. And having students compose a paper on the spot during class seems like a criminal misuse of precious instructional time. These solutions assume that we have a wealth of time with the students and resources that are simply not available.

    Perhaps a more practical solution is to change the structure of the learning experience so that students are forced to engage with the knowledge and concepts in real time. Even before the A.I. crisis, there has been a movement in higher education towards classroom innovations that allow for more student-led active learning. Classroom transformations such as the flipped class, where students watch recorded lecture videos ahead of time and spend class time working in self-directed groups under the teacher’s supervision, provide the opportunity to assess students on their work without sacrificing the instruction time. If implemented well, these or other student-centered and process-oriented approaches could be welcome innovations at all levels of education. I personally favor a hybrid approach that weaves elements of the flipped class into a traditional lecture so that students periodically break out into group learning sessions where they work on practical exercises that I build my lecture around, discussing things amongst themselves while I walk around and keep things on track.

    On a final note, let me appeal to those who don’t share my personal pessimism (doomerism, perhaps) towards generative A.I. in the context of education. If you think that chatbots are harmless or even have a positive role to play in the learning process, you still have to recognize that the landscape has changed because of them, and that the challenges we face as educators in the wake of this change, as attested to by students themselves, are real. And the potential solutions mentioned above are perfectly compatible with students using generative A.I. Indeed, they are predicated on the assumption that there is nothing we can do to stop it. But we cannot go on as if nothing has changed, grading papers we know were written almost entirely by A.I. That way lies mutual resentment and cynicism, until finally instructors say: OK, two can play at this game. Since the chatbots did the writing, I’ll let the chatbots do the grading. This leads to a complete lose-lose situation where, to paraphrase an old joke, we pretend to teach, and they pretend to learn. We can’t accept that outcome.

    I don’t like generative artificial intelligence as it currently exists. Part of this is just a personal reaction. As an amateur artist, I felt my stomach drop when I saw a Midjourney-generated image win an art contest and the person behind it responded to the backlash by saying something like: get over it, art is over, the machines won. As an educator, I watched in disbelief and disgust as academic dishonesty took a quantum leap forward with the proliferation of large language models. Moreover, it feels to me like the heat death of human creativity. I’ve heard about the upside too, all of the ways this technology is going to make our lives better and unleash everyone’s potential, but I remain an A.I. malcontent.

    I’m familiar with the moral debate about training these models and I know most of the arguments for and against the widespread proliferation of A.I. usage. I don’t intend to delve into any of that here. Instead, I want to offer a rationale for why I personally choose not to use generative A.I. at all, even if it could make my life a little more convenient in the short term. I have in mind a simple decision matrix, structurally similar to Pascal’s wager, pictured below.

    This compares the decision to embrace or abstain from using generative A.I. under two possibilities: that A.I. takes over everything, or that A.I. turns out to be just another tool. Now, by ‘A.I. takes over’, I don’t necessarily mean that some powerful, sentient artificial general intelligence like Skynet from The Terminator or the Master Control Program from Tron has seized control. I simply mean that generative A.I. will have so thoroughly transformed the economic order and everyday life that, for better or worse, no human being’s skills, knowledge or abilities make any real difference anymore. The other possibility is that A.I. is here to stay but ends up transforming our world to a lesser degree. It becomes just another tool, albeit a ubiquitous one, like the smartphone. I won’t bother with other, unlikelier possibilities, such as everyone getting sick of A.I. and it going away in a few years.

    As I see it, in the case that A.I. is soon to take over everything, it won’t matter much whether I choose to adopt it or abstain from it in the interim. We will all be obsolete: scientists, coders, engineers, artists and prompt whisperers. Let’s hope we’re taken care of.

    If A.I. proves instead to be a ubiquitous tool that still requires a semi-skilled user, then the problem I see is that people’s cognitive abilities and skills may wither away from disuse as we become overly reliant on generated content to replace our own output. This creates an opportunity for anyone who chooses to practice a given skill unaided by generative tools: under such a scenario, that person’s skills would become more valuable because they are rare. On the other hand, the better and more user-friendly A.I. tools get, the more everyone will be able to get more or less the same results out of them, conferring nothing special for having embraced their use. Therefore, abstaining from A.I. could lead to the best outcome, while having no worse a possible outcome than embracing it.
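
    Laid out as a grid, the wager looks like this:

                             A.I. takes over       A.I. is just another tool
        Embrace A.I.         we are obsolete       skills atrophy; output no better than anyone else’s
        Abstain from A.I.    we are obsolete       practiced skills become rare and more valuable

    Abstaining ties with embracing in the first column and wins in the second. That is the whole wager.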

    Let me address the limitations of this model and anticipate some objections. It might be said that I am presenting a false dilemma, because there are endless intermediate positions between total abstinence and the degree of reliance on A.I. that is likely to lead to the atrophy of skills and cognitive ability. I acknowledge that there is a spectrum of positions, but what I am saying is that the more one embraces A.I. usage, the more one’s skills will atrophy, and the more one abstains and practices these skills independently, the better those skills will be maintained. Some people will argue that using A.I. will not atrophy but actually enhance their skills, meaning the underlying skill of writing, drawing, and so on, not the skills specific to prompting the A.I. or setting up an A.I. workflow. Here I think people are confusing augmentation with improvement. A.I. may be augmenting your abilities when you use it, allowing you to output higher quality content than you would otherwise be able to produce, but is it making you any better at doing those things when the power is off? I suppose the jury is still out on this one, but it seems to me that walking around all the time in a mech suit that does the heavy lifting for you is liable to make you weaker, not stronger.

    Another objection is that my entire premise is wrong because in a post A.I. world, these human skills won’t be valuable anymore. Who cares if you can draw a picture, write a book, write some code, or explain the significance of Boyle’s Law to an undergraduate when a passable synthetic approximation of any of those things is available at the click of a mouse? I have several responses to this objection. First, excellence, creativity and innovation are always needed in all skill domains, and generative A.I. of the kind we have now may not ever fully surpass us in these attributes. If it does, then we’re probably headed for the scenario where A.I. takes over everything, and none of this matters. Second, the fact is that many people care about the process, about authenticity, about having something that a person crafted in some kind of time-honored way. And they may be willing to pay a premium for it. Like the Narrator said in Fight Club, “I had it all. Even the glass dishes with tiny bubbles and imperfections, proof they were crafted by the honest, simple, hard-working indigenous peoples of… wherever.” Granted, this doesn’t apply to all domains (alas, I doubt many people care about ‘hand-typed computer code’). Lastly, it seems to me that from a self-care perspective, you want to try to avoid putting too many things on Easy mode in life. Many of us must go out of our way just to get enough steps in per day to maintain our long-term health. I’ve written elsewhere on this blog about the benefits of thinking and doing things for oneself so I won’t repeat them here, but this is going to increasingly be something that we must consciously seek out. So there are intrinsic benefits to cultivating these skills regardless of the economic situation of the future.

    This concludes my self-interested case for generative A.I. minimalism. I should say that I’m not trying to moralize here or shame anyone for choosing to use it. It’s hard to avoid, frankly. And there are plenty of other things I should probably boycott but don’t, out of convenience, so I have no leg to stand on anyhow. Finally, I must admit that this is an easy choice for me because, as I mentioned, I find generative A.I. viscerally repellent. So maybe I am wrong and will miss out, but this is my play and I will live with the consequences.

  • Teaching from models

    I decided to call this blog The Model Kit because the way I have come to approach teaching organic chemistry is to begin with these simple physical molecular models and to teach the subject much the way an artist learns to draw: by observing an object.

    Learning organic chemistry is often compared to learning a second language because the ubiquitous drawings of molecules are akin to a pictographic language. The alphabet of this language is the periodic table of the elements, the molecules are its sentences, and chemical reactions are the stories we tell.

    This is a good analogy, as far as it goes, but I always felt that it was missing something. Chemistry is like a second language, but it is also like visual art, because on one level we are simply drawing an approximation of what these molecules look like. To be sure, our drawings are highly technical and abstract, and these are not objects that can even be seen by the naked eye. But we are still communicating structure, internal movement, and transformation by drawing pictures sequentially. Why not begin with that intuitive premise?

    Starting with the model and learning the molecular drawings by drawing the model is similar to how a comic book artist or mangaka starts by learning to draw realistic faces from life and then stylizes and simplifies down to an expressive and appealing cartoon style. They learn which details to emphasize and which to leave unexpressed. Ultimately, they will become conscious of the purpose of the drawings; what they are meant to convey. These models are simple, but they are undergirded by a system of theoretical parts that must click into place before the structure and reactivity of these molecules can finally be understood.

    Plus, it just makes the classroom experience more fun and memorable. Bring your drawing supplies and your model kits to the lecture and we will do something interesting and interactive with them. That is my goal.

  • At 45 years of age, I’m looking back and reflecting on the fact that I’ve spent most of my adult life in academia. I’m not sure whether I should find this surprising. I feel like I’m suited to it, but things could have turned out very differently. I always did exceedingly well on standardized tests, but in the end, I barely graduated high school. Sometimes people offer excuses on my behalf: “You were just bored.” Well, sure, but that was not the fault of the subjects I was studying or the teachers or the curricula. The fault was mine for not applying myself to the material and finding, creating, an interest in it. I can give my younger self a little slack and note that I was also clinically depressed throughout high school. Regardless, the fact is I was not looking like college material by the end of it.

    Instead, I enrolled in the School of Hard Knocks and got a traveling construction job for a year or so. The work was hard, but mostly unskilled, physical labor. I enjoyed it because it allowed my mind to wander freely for most of the day while my body was on autopilot toting concrete blocks off wooden pallets and building retention walls with them. But it wasn’t long before I decided that this wasn’t the life I wanted, and that my last chance at turning things around was to join the Army, have some adventures, and leave with the GI Bill college money that was on offer.

    My misadventures in the Army are a whole anthology’s worth of stories for another time. After I did my four years, I came home and enrolled in Pasco-Hernando Community College. I was still gung-ho from the military, so I set myself the hardest goal I could think of: I would get on the pre-med track, go to medical school and become a doctor. I had to start at the bottom with remedial math courses, but then I got a taste of science with intro to biology and was hooked. We learned about Darwin and Mendel and the discoveries that led to the early theory of evolution and common descent. Although I was not terribly religious, I was disturbed by the materialist implications of evolution and the firmness with which it seemed to close the door on much of the supernatural. I read Darwin’s On the Origin of Species determined to find a fatal flaw in his reasoning and refute Darwinism. Instead, I emerged from that task with a profound respect for the scientific process and a new fascination with the natural world.

    After two years of community college, I gained admission to the University of Florida and declared a major in chemistry. Why chemistry and not biology, I can’t say, because it wasn’t until I got to UF and took organic chemistry (that dreaded destroyer of majors, that pitiless weed-out course that the medical schools rely on to do their dirty work and thin the application pool) that I fell in love with chemistry. Biology was my first love, but it was the chemistry in biological systems that really interested me. The funny thing is that at the introductory level, chemistry is rather dry and boring, while biology is a treasure trove of fascinating wonders. But at the professional level, research in biology seems as dry as it gets, while professional chemical research is dynamic and exciting. But of course, that’s my opinion. I’m a chemist!

    During my first semester at the University of Florida, I was spending my days wandering among the Collegiate Gothic architecture from one lecture hall to another, thinking of all the things in the world there are to study, and I said to myself, “I could do this forever.”

    (to be continued…)

  • Our Chiral World

    Using the insights from organic chemistry to understand asymmetries everywhere.

    Asymmetries make the world interesting. They also arise spontaneously in the evolution of complex systems such as living beings. Asymmetries abound in organic biological molecules such as proteins, carbohydrates and the building blocks of DNA because carbon-based molecules share a property we are familiar with from the everyday world: they can exhibit handedness.

    Hands come in two varieties, left and right. The property of handedness is properly called chirality (from the Greek word “cheir” (χείρ), which simply means “hand”). While there are many everyday objects that are chiral, hands are the most immediate example. We know what it is to be right-handed or left-handed, and many tools are built with a particular hand (typically the right) in mind. We can see that there is a difference between our left and right hands: they don’t overlap in space in a one-to-one manner. Stack your right hand on top of your left, palms down, and you’ll see that the thumbs point in opposite directions. Try to fix this by flipping one hand over, and now one hand is palm up and the other palm down. They can’t occupy the same space in the same way; they are non-superposable.

    Another interesting feature of hands is that in a mirror image, a right hand is converted into a left hand, and vice versa. A chiral object is not identical to its mirror image, but together the object and its mirror image form a complete left-right pair. We can combine these observations into a complete definition of chirality: a chiral object is one that is non-superposable on its mirror image. An object and its mirror-image counterpart are called a pair of enantiomers.

    Chirality is a mysterious thing. There doesn’t seem to be anything to be said about the property of right-handedness other than that it is the opposite of left-handedness. But what does this difference consist of, exactly? Other than that they are non-superposable mirror images, anything one could say about an idealized right hand could equally be said about its mirror image. The order of the fingers, the distance between palm and thumb, any internal measurements would be the same. This mystery has caught the attention of philosophers over the years, and Immanuel Kant and Ludwig Wittgenstein have both offered interesting takes on the subject.

    It is only when a chiral object interacts with another chiral object that differences between left and right emerge. When we shake hands, we conventionally shake right hand to right hand. A left-hand-to-left-hand shake would be functionally equivalent. But try shaking right hand to left hand, and the grip becomes awkward. Left-handed people, always in the minority, know that tools such as scissors or can openers (themselves chiral objects) are typically designed to fit into a right hand and are awkward to operate left-handedly. Borrowing a term from organic chemistry, we may call systems of objects made up of multiple chiral subunits diastereomers. In the simplest possible scheme, a system of two hands could exist as one of three diastereomers: right-left, right-right, or left-left. If the order matters, this number increases to four: right-left, left-right, right-right, and left-left.
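
    A quick enumeration makes the counting concrete. Here is a minimal sketch in Python, with “R” and “L” standing in for right and left hands:

        from itertools import product

        hands = ["R", "L"]

        # Ordered pairings: (first, second) combinations count separately.
        ordered = list(product(hands, repeat=2))
        print(ordered)  # [('R', 'R'), ('R', 'L'), ('L', 'R'), ('L', 'L')] -> 4

        # Unordered pairings: R-L and L-R collapse into a single combination.
        unordered = {tuple(sorted(pair)) for pair in ordered}
        print(sorted(unordered))  # [('L', 'L'), ('L', 'R'), ('R', 'R')] -> 3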

    We see diastereomers everywhere. In combat sports such as boxing or MMA, fighters assume a chiral stance that matches their handedness: ‘orthodox’ for right-handed fighters and ‘southpaw’ for left-handed fighters. The dynamic between two orthodox fighters is different from that between an orthodox fighter and a southpaw. Unlike enantiomers, diastereomers have functional differences that can be described without needing to compare to a mirror image. These differences also exist at the molecular level of biology.

    The philosopher Giovanni Villani has argued that chemistry is distinguished as the discipline in which systemic complexity is studied in its simplest form. The description of complex systems is incomplete without language that describes the asymmetries that interact with the design and function of these systems. Organic chemistry provides us with this language, and it is applicable to the entire world on every level.