Why are conservatives referred to as the “right” and liberals referred to as the “left” in politics? The answer involves the French Revolution, the quick spread of information through newspapers, and the tense interlude between the two World Wars.
Political beliefs are often described as being on a spectrum from left to right. Left refers to liberal views, such as advocating for progressive reforms and seeking economic equality by redistributing wealth through social programs. On the far left, we have revolutionary ideologies like socialism and communism. Right refers to conservative views, such as maintaining existing institutions and traditional values while limiting government power. On the far right, we have nationalistic ideologies like fascism.
Vive la France
The political descriptors left and right originally referred to the seating arrangements for members of the French National Assembly in 1789, who convened during the French Revolution to draft a new constitution. From the position of the speaker of the assembly, those seated on the right side of the room were nobility and high-ranking religious authorities. Those seated on the left side of the room were commoners and lower-ranking clergy members.
The division originally arose over the issue of how much authority the king should have. Those in favor of the king having absolute veto power sat on the right, and those who favored limiting the king’s veto power sat on the left.
The higher-ranking members of society tended to be more pro-aristocracy and generally were more reactionary in their political views, while the lower-ranking members of society tended to be pro-revolution, more radical, and more centered on the needs of the lower and middle classes. Those who sat closer to the center of the room tended to be more moderate in their views than others in their faction. The left was “the party of movement,” and the right was “the party of order.”
Newspapers reported on left-wing and right-wing views, and the terms left and right spread quickly into popular usage in France.
Over the next century, the seating arrangements in the French legislature persisted at some times and were discouraged at other times. When the French Third Republic was established in 1871, the terms left, right, and center were used in the names of political parties themselves: the Republican Left, the Centre Right, the Centre Left, the Extreme Left, and the Radical Left were the major political parties of the day.
The Interwar Years
Right and left became widely used throughout Europe in the 1920s and 1930s, the years between the two World Wars, when people “wrestled with the politics of nation and class” and found these labels to be a simplified way to describe complex political ideologies. Marci Shore, a professor of European history, writes, “The interwar years were a time of a polarizing political spectrum: the Right became more radical, the Left became more radical; the liberal center ‘melted into air’ (to use Marx’s phrase)” (Carlisle, 2019).
Left and Right in America
Right and left entered usage in America in the 1920s and 1930s as well, but some shied away from the terms, especially left, throughout the mid-20th century due to connotations with extreme ideologies. The 1960s saw a shift toward people defining themselves more consistently with these terms in an effort to differentiate their views from others, as both liberals and conservatives were dissatisfied with the current political consensus. We see again that left and right were used as shorthand ways of categorizing people—a person on the right sees a person on the left as the “other,” and vice versa.
In America, “left” is often synonymous with the Democratic Party, while “right” is often equated with the Republican Party. However, political views span a wide spectrum, and some may fall in between the positions of the parties or way outside the bounds of either one. The definitions of left, right, and center are dynamic and change relative to one another throughout time. The terms meant something different during the French Revolution, in the Soviet Union, during the New Deal, and in America in 2021 and will continue to shift as parties and policies realign in a changing political climate.
Why does a rabbit leave colored eggs, candy, and nonedible novelties for children on Easter morning? The answer involves little ones leaving out an item of clothing overnight with the expectation that it will be filled with gifts, families providing a favorite snack for the mythical bringer of presents, and naughty children receiving a lump of coal . . . sounds familiar.
The Easter Bunny is a curiously unexplored phenomenon—the jolly figure of Santa Claus appears in Coca-Cola ads, Christmas cards, and the minds of children around the world, but his rabbit friend in the pantheon of holiday figures has no single recognizable image. Santa Claus, his elves, his workshop at the North Pole, his big bag of presents, and his magical sleigh pulled by reindeer are a cohesive set of traditions. But where does the Easter Bunny live? What does he look like? Where does the Easter Bunny get candy and eggs and other little trinkets to put in Easter baskets? And why, in the name of the spring fertility goddess, is the Easter Bunny (a mammal, we might remind you) associated with eggs?
As the most important holiday in Christianity, Easter is a celebration of the new and everlasting life that comes through the Resurrection of Jesus Christ. Springtime holidays from pagan and secular traditions also focus on celebrating life and fertility as the world begins to blossom and the sun begins to shine after the darkness and coldness of winter. Many of the symbols we have come to associate with Easter draw from this fountain of youth. Eggs and baby animals are living proof of fertility and new life. Pastel colors reflect the newly budding blossoms in the spring. The growth of Easter lilies from a bulb in the ground to a pure white, trumpet-shaped flower is said to “symbolize the rebirth and hope of Christ’s resurrection” (History.com, 2021).
Rabbits, too, are a prominent symbol of new life: they breed like, well, rabbits. Some have estimated that a female rabbit might have up to 100 babies per season, or a total of up to 1,000 babies over a lifetime! (MentalFloss, 2015). For this reason, they are also an ancient symbol of fertility and thus have a natural association with spring holidays. The Easter holiday is the Christian celebration of the Resurrection of Jesus Christ, but the name Easter comes from the festival of Eostre, the Saxon fertility goddess, whose German name is Ostara. Some have conjectured that the Saxons believed Eostre’s animal symbol was a bunny or she had a hare as a companion, though there is little evidence in the historical record for such a claim. (Instead, later scholars may have theorized such an association to retroactively explain traditions that existed in Europe later on.) Stephen Winick at the Library of Congress explains that common observations about rabbits, eggs, and the budding of new life in the spring led to many similar traditions throughout time, whether or not a direct relationship is present between any of these traditions: “In short, we don’t need a pagan fertility goddess to connect bunnies and eggs with Easter—springtime makes the connection for us all by itself” (Winick, 2016).
The Osterhase and His Hase-Eier
Like many holiday traditions celebrated in America, the Easter Bunny has its origins in Germany. German immigrants to Pennsylvania in the 1700s brought stories about an egg-laying rabbit called the Osterhase (“Easter Hare”). Among the Pennsylvania Dutch, children made nests for the Osterhase and left carrots for him to eat as fuel on his journey, in hopes that he would leave colored eggs for them the night before Easter. Children often used hats and bonnets as nests, sometimes placing them outside in a garden or a barn where a bunny would have the easiest access. In some versions of the story, the bunny lays the eggs, while in others he brings them in a basket. The Easter Bunny was also a judge: tradition has it that he only gave eggs to children if they were good, to encourage children to behave themselves during Eastertide. Misbehaving children might receive rabbit droppings or coal instead.
Georg Franck von Franckenau’s 1682 essay “De ovis paschalibus” (“About Easter Eggs”) describes an Easter egg hunt of sorts, where the Easter Hare lays Hasen-Eier (“hare eggs”) hidden in the garden and grass for children to find. They would then feast on the eggs (real ones rather than candy-filled plastic!). Eating so many eggs without salt or butter would cause a stomachache, doctors warned—bet they never envisioned the mass sugar rush children today have from feasting on chocolate eggs.
A True Renaissance Rabbit
So why does the bunny deliver eggs, and why is he male?
In antiquity, it was believed that hares were hermaphrodites, meaning that they had the reproductive equipment of both a female and a male. Pliny, Plutarch, and other great thinkers thought that hares could switch sexes at will and even impregnate themselves. So although we speak of the Easter Bunny as a he (even though it wouldn’t make sense for a male, or a bunny for that matter, to lay eggs), well into the Renaissance hares were not believed to be strictly male or female. This led to an association of the hare with the Virgin Mary, due to its supposed capacity to reproduce while remaining a virgin (which we now know is definitely not true). Renaissance art reflects this association.
Whether the bunny actually lays the eggs or just delivers them (did he steal them from a chicken?), eggs represent both the potential for new life when a baby chick hatches and the emergence of Christ from the tomb. Because of this dual symbolism, the Easter Bunny pays a visit to people of different faiths or no faith. It exists as a tradition that draws upon symbols that can be interpreted in light of different religious beliefs, whether Christian or not. This widespread appeal likely contributed to the growing popularity of the Easter Bunny throughout the nineteenth and twentieth centuries in America.
Also, in the twentieth century, nests turned into baskets, real eggs turned into plastic eggs, and the Easter Bunny’s gifts expanded to include chocolate, jelly beans, and small toys. Candy companies capitalized on the Easter Bunny tradition by marketing spring-themed candy and other odds and ends for Easter baskets, further reinforcing the practice.
Other Bringers of Easter Cheer
The Easter Bunny isn’t the only bringer of springtime cheer. In Switzerland, the Easter Cuckoo makes the rounds, while some parts of Germany receive visits from the Easter Fox or the Easter Rooster. In Australia, the Easter Bilby initiates the springtime festivities (and don’t mention the Easter Bunny to an Australian—the overabundance of rabbits as an invasive species introduced in the eighteenth century has led to the endangerment of native animals).
We could have just had an Easter Hen. That would have made much more sense, and baby chicks are already associated with springtime festivities. But if we’re making up a mythical creature, we might as well stretch our imagination a little further!
Why are baby girls dressed in pink and baby boys in blue? The answer involves marketing tactics, a pair of famously misconstrued paintings, and ultrasound technology.
White Dresses for All
Throughout history, socially defined rules have dictated certain types of clothing that are suitable for certain people. What you might not realize is that socially defined rules also determine at what age this gender distinction begins to matter—men, women, and children are often seen as different categories of people, each with their own typical styles of clothing.
Take a look at young Franklin D. Roosevelt:
This picture, taken in 1884, shows two-and-a-half-year-old Roosevelt wearing a white dress, a feathered hat, and a long head of hair. These are things that today would be considered more suitable for a little girl, but they were typical for both genders of the upper class in the nineteenth century and earlier. In the Victorian Era, gender was not considered significant in a child’s life until about the age of seven, and little boys and girls generally wore the same types of clothing.
At age seven, boys went through a rite of passage called “breeching,” which involved dressing in pants and getting a haircut. Girls continued to wear short dresses, and as they grew older, their prescribed hemline length grew longer until their dresses reached the ankles around age 16.
Practically, having both little girls and little boys wear dresses saved parents a lot of time. Slipping a dress over a child’s head was much easier than buttoning up pants, and it simplified potty training. Clothes could also be reused for another child in the future, regardless of the child’s gender.
In earlier centuries, infants and young children had worn colored dresses in many different hues irrespective of gender. At other times, they had worn clothing that resembled that of their adult parents, reflecting a view of children as merely small adults who needed to grow up and begin working as soon as possible to help provide for the family. But in the age of bleaching, cheap cotton, and childhood, white dresses were the norm. White also had a connotation of purity and innocence, which seemed appropriate for small children.
Among Catholics, both girls and boys were sometimes dressed in blue to honor the Virgin Mary. (The same thing sometimes occurred for wedding dresses.)
Lighter tones and pastel colors followed and came to be associated with babies, though these colors were not gender-specific.
Pink and Blue as Gender Identifiers
Beginning around the mid-nineteenth century, the colors pink and blue came to be used as gender signifiers.
Items like ribbons, bows, and baby blankets were made in shades of light blue or pink to indicate whether a child was a girl or a boy. Dresses and other clothing soon followed.
Until the 1940s, two conflicting traditions existed. Magazines, advice columns, and other literary references were divided in the advice they gave to new parents. Some continued to recommend light, pastel colors in general. Some recommended mixing pink and blue for a lavender color. Some, like the 1890 Ladies’ Home Journal, explained:
“Pure white is used for all babies. Blue is for girls and pink is for boys, when a color is wished.”
(Emma M. Hooper, “Hints on Home Dress-Making” Ladies’ Home Journal, November 1890, p. 23)
Others such as Godey’s Lady’s Book noted, taking a page from sources in London and Paris,
“Blue is the color appropriated to male children, as rose or pink to those of the opposite sex.”
(Godey’s Lady’s Book, volumes 52–53, edited by Louis Antoine Godey and Sarah Josepha Buell Hale)
Marketing copy, magazines, and literary sources often cited “pink for girls, blue for boys” as the French fashion, which was a convincing reason for many people to follow this trend. The beloved 1869 novel Little Women showed this inclination:
“Are they boys? What are you going to name them?”
“Boy and girl, aren’t they beauties?” . . .
“Amy put a blue ribbon on the boy and a pink on the girl, French fashion, so you can always tell.”
(Louisa May Alcott, Little Women, Chapter 28)
These two conflicting gender assignments for pink and blue continued well into the twentieth century, and other countries had similarly mixed traditions. From Mexico to Switzerland to Korea, baby boys were dressed in pink and blue was the preferred color for girls, while other countries reflected the fashions of England, the United States, and France. Some have attempted to explain that little girls wore blue because it was associated with the Virgin Mary and was seen as a more delicate and calm color, while little boys wore pink because it was a lighter version of red, which was seen as a strong, active, passionate color.
The Shift toward Gender Coding
According to historian Jo B. Paoletti, around the turn of the twentieth century, psychological studies on child development led some child care experts to conclude that parents should make a greater distinction between the appearance of girls and boys from a younger age. It was common for mothers to be told to dress their little boys in pink so that they grew up to be more masculine and to dress their little girls in blue so that they grew up to be more feminine. Not everyone was comfortable with this at the time due to the tendency to see children as “sexless cherubs” (see Paoletti, p. 89). Though pink-blue gender coding was known even during the Victorian Era, as we have seen, it did not necessarily become widespread in the United States until about the 1950s.
However, the shift toward gender coding in terms of color had begun, along with the styles of clothing that were deemed appropriate for babies. In 1927, Time magazine published a chart describing the appropriate colors for girls (blue) and boys (pink). Though the assignment of the colors differed regionally, department stores gave similar advice—if they could convince parents that they had to buy a whole new wardrobe for a baby girl and a baby boy, parents would end up buying more baby clothes rather than reusing them.
In the 1940s, however, clothing manufacturers and popular advice columns flipped the script and began promoting pink as the color of choice for girls and blue as the appropriate pick for boys. During World War II, little boys began to be dressed in pants and had short hair, emphasizing a particular view of masculinity that reflected the clothing their fathers wore, whereas little girls continued to wear dresses like their mothers. Children began to be dressed as mini adults in a way that emphasized their gender.
What we’re looking at is not a full-scale reversal of the colors assigned to girls and boys, but a larger-scale promotion of one practice and the quiet discontinuation of the other.
The Blue Boy and Pinkie
Art history has something to say about gender coding as well. When millionaire Henry Huntington purchased two eighteenth-century paintings, The Blue Boy and Pinkie, the paintings were widely publicized by the press, and suddenly Americans began to think that “pink for girls, blue for boys” had been right all along. The Blue Boy and Pinkie are inseparably connected in the minds of many viewers, their misguided takeaway being that the colors indicate a long-standing tradition in gender color coding. (In fact, the paintings were done about 25 years apart by different artists, and the clothing styles represented in the paintings are separated by about 150 years. The artists had no conceivable gender-coding agenda in mind, either.)
Rejection and Revival
The 1960s and ’70s saw a rejection of gendered clothing and color in the second wave of feminism and other countercultural movements. Unisex clothing became more popular for young adults and children alike. In addition, feminist activists launched an anti-pink crusade in the 1970s as part of a larger movement to reject traditional gender norms and free women from the many cultural constraints that had been placed upon their sex. Ironically, this actually solidified pink in the minds of many as being essentially associated with femininity.
In the 1980s, gender color coding was back in fashion and stronger than ever. Ultrasound techniques that allowed parents to know the gender of their child before the child was born contributed to a revival of pink-blue gender coding. Now, the parents could announce the gender of the baby beforehand, friends and family could give pink or blue gifts to expecting mothers at a baby shower, and there were new ways for companies to market baby products of all kinds based on color. Clothing manufacturers and retailers targeted this market aggressively, pushing the “pink for girls, blue for boys” tradition for baby clothes. The pink-blue divide became more visible and more firmly embedded in the minds of American consumers. And in the age of pregnancy announcements on social media and gender reveal parties, “It’s a boy!” might as well just be “It’s a blue!”
Pink and Blue Today
Today, few parents would think of dressing their baby boy in pink, and many would think twice about dressing a baby girl in blue without also marking her gender in another way (such as the style of her clothing or a bow in her hair). Both men and women wear blue freely, as they have for centuries. But when grown men wear pink, it can come off as a social statement about defying gender roles, or they may feel that they need to justify their clothing choice. Wearing pink is often seen as too feminine for men. And the prejudice against men wearing pink is really a prejudice against women—the fear of appearing effeminate stems from society devaluing a color (or anything else) that has been culturally assigned to women, reinforcing sexism at a deeper level. Older girls sometimes protest wearing pink out of a desire not to appear like a “girly girl,” as if that were a negative thing. The devaluing of women and anything seen as feminine (even though there is not necessarily anything inherently feminine or masculine about pink or blue) hurts both boys and girls, as boys are told not to appear feminine and girls are told not to appear too feminine, regardless of how they may personally want to express themselves. It sends the message that anything too “female” is less important, less valuable, less capable of being taken seriously, whereas anything “male” is the default.
Cultural bias against women is changing, and with it, perhaps pink-blue gender coding as well. It is becoming more and more acceptable for men to wear pink, especially for the younger generations. A push to see gender on a spectrum rather than a male/female binary has also influenced attitudes toward gender coding in childhood. “Gender-neutral” often still means “not pink or blue,” but it is becoming more common for babies to wear gender-neutral colors, receive gender-neutral names, and sleep in a neutral-colored nursery room.
The future of gender color coding is in flux—with the opposing influence of gender reveals and gender-neutral baby products, pink and blue could become just colors, or they could be reinforced even further as gender signifiers.
Where did Rock, Paper, Scissors come from? The answer involves a Japanese game called jan-ken but probably does not involve Celtic settlers in Portugal and the French general who aided George Washington during the Revolutionary War.
First, let’s clear something up—“rock, paper, scissors, shoot” or “rock, paper, scissors”? “Rock, paper, scissors” or “paper, rock, scissors”? Best two out of three? How do we agree on the rules? Maybe we could decide with a tiebreaker, a hand game of sorts . . .
Sansukumi-Ken: The Origin of Rock, Paper, Scissors
The first known reference to a game using finger signs is a painting on a tomb wall in Egypt dating to 2000 BCE. A precursor to Rock, Paper, Scissors using three distinct hand gestures was first played in China during the Han dynasty, around 200 BCE. The game was called shoushiling, according to Xie Zhaozhi in his book Wuzazu, written in the 1600s.
This game was then introduced to Japan, spurring an entire genre of hand games known as sansukumi-ken. This translates to “the ken (fists) of three who are afraid of one another,” in reference to three hand gestures used in the games where A beats B, B beats C, and C beats A. These hand games were often coupled with drinking and were sometimes played in brothels. One speech made in 1809 recounts a ken tournament in Nagasaki’s red-light district with feasting and dancing. At some point, these games shed their association with drinking, stripping, and prostitution and began to be played by children.
The earliest recorded sansukumi-ken game was known as mushi-ken. This game involved three gestures: the frog (the thumb), the slug (the pinky finger), and the snake (the index finger). The frog defeats the slug, which defeats the snake, which defeats the frog. Another popular version called kitsune-ken featured a supernatural fox (kitsune) well-known in Japanese mythology, who defeats the village head, who defeats the hunter, who defeats the fox.
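The cyclic structure these games share (A beats B, B beats C, C beats A) is simple enough to sketch in a few lines of code. The function and gesture names below are our own illustration, not part of any historical source:

```python
# A minimal sketch of the cyclic win relation shared by all sansukumi-ken
# games: each gesture defeats exactly one other gesture and is defeated
# by the third, so no single gesture dominates.

def make_judge(beats):
    """Build a judge for a three-gesture game from a {gesture: gesture_it_beats} map."""
    def judge(a, b):
        if a == b:
            return "draw"
        return a if beats[a] == b else b
    return judge

# mushi-ken: the frog beats the slug, which beats the snake, which beats the frog
mushi_ken = make_judge({"frog": "slug", "slug": "snake", "snake": "frog"})

# modern jan-ken (Rock, Paper, Scissors)
jan_ken = make_judge({"rock": "scissors", "scissors": "paper", "paper": "rock"})

print(mushi_ken("frog", "slug"))   # frog
print(jan_ken("paper", "rock"))    # paper
print(jan_ken("rock", "rock"))     # draw
```

Swapping in a different three-entry map gives kitsune-ken (fox, village head, hunter) or any other variant without changing the logic.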
So the game could have been called “Frog, Snake, Slug” instead—or maybe “Foxhunt.”
The most common version today is called jan-ken and features rock, paper, and scissors. This variation developed in the nineteenth century and spread beyond East Asia for the first time in the early twentieth century. Sepp Linhart, author of “From Kendo to Jan-Ken: The Deterioration of a Game from Exoticism into Ordinariness,” indicates that the global appeal of the jan-ken version of the game stems from its use of simple, ordinary objects that were familiar to a wide audience.
Through increased contact between the East and the West, sansukumi-ken games from Japan were introduced in England, Australia, the United States, and France. Newspaper articles and letters in the 1920s and 1930s described the game as a method of casting lots, gambling, or settling disputes, going into detail about the specifics of the game for those who were yet unfamiliar with it. The game was also known as “zhot” or “jan-ken-pon.”
There are other potential sources of Rock, Paper, Scissors since there are similar games found in cultures around the world, and internet legends abound. According to the Straight Dope, some have purported that the hand game made its way into common knowledge by way of a Celtic tribe that settled in Portugal in the sixth century BCE. The game spread throughout Portugal in the following centuries. Pihedra, Papelsh e Tijhera, as the game is now called in Portuguese, spread further due to the Roman invasion of the Iberian Peninsula and subsequent intercultural contact. However, the game was seen as a potential threat to Roman rule and was suppressed in the British Isles until 350 CE. This explanation lacks any real evidence, but it’s just one example of a potential parallel across cultures. The hand game played today in many countries around the world was most likely spread from Japan rather than from similar hand games found among the Celts or any other group of people.
Why is Rock, Paper, Scissors sometimes called roshambo? For some unknown reason, the game became associated with Jean Baptiste Donatien de Vimeur, Comte de Rochambeau, who commanded the French Expeditionary Force sent to help the United States during the Revolutionary War. His name was used as a code word during the Battle of Yorktown, in which the British army surrendered to the combined American and French forces. Since Rock, Paper, Scissors was not widely known in the West in the eighteenth century, there is little basis for Rochambeau knowing or using Rock, Paper, Scissors to settle a dispute. Additionally, the earliest known use of “roshambo” is from 1936 in a book called Handbook for Recreation Leaders.
Linguist and language commentator Ben Zimmer hypothesizes that children in the San Francisco Bay Area (an area home to many East Asian immigrants) in the 1930s may have combined their knowledge of the defeat of the British at Yorktown, which they had learned about in school, with the new, popular hand game in which they tried to defeat their opponents or settle disputes. The name of the famous general was Americanized and became roshambo. (Anyone up for a game of roshambo? Who wants to be the British?)
And just for fun, here are some other variations on Rock, Paper, Scissors:
Rock, Paper, Scissors, Lizard, Spock (United States)
Ant, Human, Elephant (Indonesia)
Tiger, Village Chief, Village Chief’s Mother (Japan)
Bird, Water, Stone (Malaysia)
Muk-zzi-ppa, where the goal is to get your opponent to play the same sign as you (Korea)
Why is the painful cramp you sometimes get in your leg called a charley horse? The answer involves baseball and continual adaptation of oral history.
What Is a Charley Horse?
A charley horse occurs when a muscle contracts involuntarily, causing a painful cramp that can last from just a few seconds to a whole day. These cramps occur most commonly in the legs and feet but can happen elsewhere in the body.
These cramps can be caused by a number of things, including inadequate blood flow to the muscles, injuries, overusing a muscle, and stress. Another common cause is a mineral imbalance due to inadequate potassium, calcium, or sodium in the blood, which can be caused by dehydration.
Charley horse formerly referred to a muscle injury in the leg that caused blood to pool outside of the blood vessels. This is now known as a dead leg and often causes pain and limited mobility for several weeks.
So Who’s Charley?
The origin of the term charley horse to describe a muscle cramp is murky, but all sources point toward an origin in baseball.
The oldest use of the term was in an 1886 letter published in the Louisville Courier-Journal. Jim Hart, manager of the Louisville Colonels baseball team, wrote:
Ely is still suffering from a sore arm, and Reccius has what is known by ball players as “Charley Horse,” which is a lameness in the thigh, caused by straining the cord.
One well-known origin story of the term holds that “Charley” was a lame horse that pulled the roller to prepare the field at the Chicago White Sox ballpark (World Wide Words).
In a similar vein, baseball official Bill Brandt explained the term as a reference to a lame horse named Charley in Chattanooga, Tennessee, who pulled things around the ballpark. Between practice and the start of a game, the players watched as Charley dragged a dust-brush around the baseball diamond. When a player on the team suffered from a pulled tendon or other injury that caused limping, the other players would jokingly refer to him as “Charley Horse” (Shulman, 1949).
However, Brandt offered a different explanation shortly after this statement. He cited a joke made by coach Billy Sunday about a hobbling baseball player, in an analogy to a horse race the players had made a bet on. This explanation is doubtful as well, and some have conjectured that Brandt changed his story to honor Sunday shortly after Sunday died.
Henry Mencken, author of The American Language, conducted an investigation into the term at the request of the editors of Webster’s New International Dictionary, Second Edition. Mencken’s research found several different explanations, none of them more plausible than the rest:
In 1934, Baltimore Orioles second baseman Bill Clarke claimed that the term referred to “Charley Esper, a left-handed pitcher, who walked like a lame horse.” However, the term was in use long before Charley Esper ever joined the Orioles.
In 1944, Billy Earle, a catcher who jumped around to several teams and dabbled in hypnotism and spiritual healing on the side, said the term was suggested by a Sioux City groundskeeper named Charley who had a horse.
In 1943, Dr. Logan Clendening claimed that a charley horse was a ruptured muscle (based on the previous medical definition of the term) and that it occurred in the same way that a horse suffers stringhalt. He seems to have connected the two based on pathology, though it remains unclear from this explanation exactly who “Charley” was.
None of Mencken’s proposed etymologies truly fit the bill in light of the earlier usage of the term. Apparently, Webster’s agreed: In Webster’s New International Dictionary, Third Edition (1961), charley horse was said to come from “the occurrence of Charley as a typical name for old lame horses kept for family use” (Woolf, 1973).
As one last explanation, the American Dialect Society cites an article in the Washington Post from 1907 that attempted to explain the term, which had already been in use for a few decades at that point. The article postulated that charley horse made reference to pitcher Charley Radbourne, who was affectionately nicknamed “Old Hoss.” Radbourne suffered a muscle cramp during a game in the 1880s. To describe the condition, the name charley horse was coined by putting together the pitcher’s first name, Charley, with part of his nickname, Hoss (a variant of horse; slang for a large, strong, and respected person). Thus, charley horse.
Sorry, charley—no one knows exactly where the term charley horse came from. All we can say for sure is that it became popular among baseball players in the 1880s and 1890s. Though there are many different theories, etymologists and historians continue to disagree about who originally coined the term. It’s likely that players used the term to poke fun at one another, and that each retelling of their stories became, from the tellers’ own point of view, the origin of the phrase. The continual shaping and reshaping of oral history was a way for baseball players to make the term uniquely their own and stake a claim in the lingo of the game.
Why does the heart shape look absolutely nothing like a human heart? And on a related note, why is the heart, anatomically correct or otherwise, associated with love? The answer involves herbal contraceptives, pinecones, and Aristotle’s faulty understanding of human anatomy.
If you had to pick one symbol to represent love, what would it be? It would probably look like this:
And you would probably say that it’s a heart. But the human heart looks like this:
And you would say that this giant muscle—which beats an average of 100,000 times per day and pumps about 70 gallons of blood through your body each hour, generating enough pressure during a contraction to squirt blood 10 feet if the aorta were cut open—represents . . . love.
The heart is a fairly single-minded muscle. Its main job is to pump blood throughout your body, and the organ itself isn’t necessarily the origin of love in the body.
In the words of Bill Bryson,
It has been calculated (and goodness knows how, it must be said) that during the course of a lifetime the heart does an amount of work sufficient to lift a one-ton object 150 miles into the air. It is a truly remarkable implement. It just doesn’t care about your love life.
(Bryson, The Body: A Guide for Occupants, 112.)
Let’s take a closer look.
The Heart of Love
Throughout many different cultures and religions, spanning thousands of years of human history, the heart has been regarded as the seat of human emotion, life, and will. In the ancient Near East, the heart was both the seat of emotion and the location of the mind, functions that were also associated with the bowels. The Aztecs extracted the hearts of human sacrifices to offer to the gods and regarded them as the seat of the individual, the sun being a great heart-soul. In Hinduism, the heart represents the atman, the divine center or true soul of a person. Classical philosophers in the tradition of Aristotle also believed thought and reason occurred in the heart rather than the brain, which we now know is not the case. Really, emotion is created in the brain, too, and experienced through physiological reactions, which might involve changes in blood flow, heart rate, and hormone secretion.
The ancient Greeks and Romans linked the heart to strong emotions, and Greek poetry connected the passions of love with the heart. In Roman mythology, Venus, the goddess of love, directed Cupid to set human hearts on fire with love.
In Medieval Europe, the idea of wholehearted, devoted, romantic love became idealized in the feudal courts of France. A young man would play instruments and sing to a lady he hoped to woo, pledging his whole heart to her forever. The yearning, romantic sentiments found in courtly love spread to Spain, Italy, Portugal, and all over Europe, and “love staked out its place not only as a literary concept but also as an important social value and an intrinsic part of being human” (Yalom, 2019).
How Do We “Feel” Love?
From a scientific standpoint, the heart and the blood it pumps both play a role in our experience of emotions, including love. When we blush with embarrassment or redden with anger, it’s because our blood pressure increases as a reaction to our thoughts about a humiliating or enraging situation. The many blood vessels in the face show these variations in blood flow (Martinez, 2018). And when you feel nervous around someone of the opposite sex and experience the fight-or-flight response, more blood is directed to the arms and legs, preparing the body for action. This can be a bit annoying when the only action you’re looking for is asking someone out on a date. These bodily responses to emotion, however, are not necessarily universal—physiological responses to and drivers of emotion depend largely on cultural context (Butler, Lee, and Gross, 2009).
Lisa Feldman Barrett has described the brain’s process of creating emotion as different brain regions spontaneously acting together to produce a feeling based on various inputs. The feeling is shaped by a person’s previous experiences and cultural understandings of emotion concepts (Bryce, 2017).
Though the physiological responses and outward manifestations of emotions may be culturally distinct, cultural universals may be found in the area of the body where certain emotions are felt. In a study of both West European and East Asian subjects, love was described as a warm feeling in the upper and middle regions of the body, seemingly radiating out of the center of the chest. The researchers concluded that the somatosensory experience of different emotions, including love, can be mapped to certain areas of the body (Nummenmaa, Glerean, Hari, and Hietanen, 2014).
It makes sense, then, to use the heart as a metaphor for love—we embody the emotions we feel in a very real, physiological way. We feel emotions because our physiological response creates those feelings. We can feel love in our heart—that warm, sometimes fluttery feeling in our chest that radiates outward—as an embodied experience of affection for another person.
The Heart Shape
The heart shape as we know it was first used to depict plants rather than human organs. Until the late Middle Ages, the heart shape commonly represented peepal leaves in the Indus River Valley; silphium in ancient Greece, Rome, and Northern Africa; and water lilies, fig leaves, and ivy in Europe. Silphium in particular was linked to love and sexuality due to its use as a contraceptive, and its heart-shaped fruit was featured on coins in Cyrene as early as the sixth century BCE. Additionally, ivy was noted for its longevity and was seen as an emblem of eternal love.
The first known—although contested—depiction of a heart shape as a representation of love was in an illustration found in the French text Roman de la poire, dating to the 1250s. A capital S is decorated with a lover offering his heart to his mistress. It looks like an upside-down pinecone, or perhaps a pear, with the narrow end facing upward. This is consistent with descriptions of the heart in anatomical literature of the time (Aristotle also mistakenly taught that the heart had three chambers instead of four, leading to incorrect anatomical descriptions that were not corrected until the sixteenth century). In the scene in the manuscript that this illustration accompanies, a lady gives a pear to her lover, which is an allusion to Eve offering a piece of fruit (believed by many at this time to be an apple) to Adam in the Garden of Eden.
A similar scene is illustrated in The Romance of Alexander, a 1344 French manuscript by Lambert le Tor. The lady lifts the heart that her beau has given her as he touches his chest, whence the heart came. This manuscript led to “an explosion of heart imagery,” especially in France.
An early depiction in Italy was Giotto’s 1305 painting of Charity, one of the seven virtues personified in the Scrovegni Chapel in Padua. Charity hands to Jesus a pinecone-shaped heart with the tip facing upwards—symbolically offering her love. This theme was reflected in several other works of art in Northern Italy in the fourteenth century.
By the mid-fourteenth century, the heart or pinecone shape had been turned upside down with the point facing the bottom, and around the same time, the wide part of the symbol took on a more scalloped look. Thus, the modern heart shape was born. It became popular in Europe around the sixteenth century and was used in religious imagery, such as the Luther Rose and the Sacred Heart, inspiring fervent devotion to Jesus and serving as a sign of monastic love.
As we can see, the heart wasn’t limited to romantic love. As the seat of all emotion, the heart particularly represented faithfulness and bravery. A heart on a coat of arms was a symbol of courage—the very word itself is derived from cor, meaning “heart” in Latin (Jauhar, 2018, p. 20). Metaphors in many different languages attest to the different strong emotions attributed to the heart—to “speak from the heart” is to be sincere, to “take heart” is to be brave, repentance and reconciliation require a “change of heart,” and the Grinch’s heart was lacking in compassion, for it was “two sizes too small.”
Another drastic change in the use of the heart icon, also known as the cardioid, came in 1977, when the “I ❤ NY” logo was created to attract tourists to a struggling New York City. The heart was no longer seen only as a symbol of romantic love—it encapsulated a fondness for an iconic American city, spurring spin-offs and clichéd T-shirts for everything imaginable in addition to positively changing the perception of New York. Heart was now a verb synonymous with love, depending on how you read the ❤ symbol out loud.
In 1999, when the first emoticons for mobile communication were released, the heart symbol visually communicated love in a quick and simple way. Chat rooms, text messages, and social media reactions have only increased the use and visibility of the heart emoji over the past two decades. On the latest iPhone, there are 24 unique heart emojis, plus more that include hearts as part of a larger image—and there’s even an anatomically correct one! (Click here for an n-gram analysis showing how different heart emojis are used, if you’re into that kind of thing.)
The heart shape is now an undying symbol of love, whether that love is undying or not. And whether or not the heart itself creates emotion, it is an important part of the way we feel emotion. What does love feel like to you?
Butler, Emily A., Tiane L. Lee, and James J. Gross. “Does Expressing Your Emotions Raise or Lower Your Blood Pressure? The Answer Depends on Cultural Context.” Journal of Cross-Cultural Psychology, vol. 40, no. 3 (2009), 510–517. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4260334/.
Why is a good gardener known as a green thumb? The answer involves a vegetable-loving king, a wartime radio show, and a dishonest corn miller.
In American English, a person with skill for gardening is sometimes called a “green thumb.” The expressions “having green fingers” and “being green-fingered” are the equivalent in British English. And the opposite—someone who lacks skill at growing plants—is known as a “brown thumb.” But just how did these expressions come to be?
Thumbs and Fingers
One theory is that algae grows on the underside of earthenware pots, and it can stain a gardener’s fingers green if he or she handles them often enough. A gardener who puts in the time and effort to work with enough gardening pots could literally have a green thumb.
Another, albeit dubious, theory comes from a story about King Edward I, who loved green peas and kept half a dozen servants shelling peas when they were in season. He rewarded the servant who shelled the most peas, as evidenced by the greenest thumb.
“Green fingers” was the phrase recorded first, however, and it was used as early as 1906 in the novel The Misses Make-Believe by Mary Stuart Boyd. Boyd wrote of “what old wives call ‘green fingers’: those magic digits that appear to ensure the growth of everything they plant.”
“Green thumb” was first recorded in a 1937 Ironwood Daily Globe newspaper article noting that it was slang for “a successful gardener with instinctive understanding of growing things.”
Both phrases caught on in the 1930s and ’40s when they were used on a popular BBC radio program called “In Your Garden,” hosted by C. H. Middleton.
The Green Thumb and the Golden Thumb
The thumb in particular being green may have been an analogy to a Middle English proverb: “An honest miller has a golden thumb.” This phrase originated around 1386 in The Canterbury Tales, in which Geoffrey Chaucer writes that the miller “hadde a thombe of gold.” Chaucer tells us that the miller also stole corn and charged three times what it was worth, yet he was regarded as having a gold thumb. There are various interpretations of this saying. One interpretation is that millers were widely regarded as being dishonest, so even the most trustworthy still took a secret cut. Nobody really has a golden thumb, so a truly honest miller doesn’t exist. Along those lines, millers sometimes deceived customers by using a finger or thumb to press down on the scale when weighing grain, thus driving up the price. Another interpretation is more along the lines of the miller having a Midas touch—grain seemed to turn to gold in his hands because of how lucrative his business was. So perhaps a golden thumb could refer to someone with a skill for making money, often in a dishonest way.
The golden thumb and the green thumb could be siblings in the family of English idioms—or they could be unrelated. The green color of the thumb could have some literal meaning—or it could simply be an association with the color of plants. There is great temptation to make connections between phrases and ideas without real evidence from historical usage, but what we do know is that gardeners work with their hands, digging and pruning and using their thumbs and fingers to work with all shades of green plants.
A Modern Twist
We also know that gardening requires skill, patience, and effort to bring about the rewards of flowers and fruit.
One modern interpretation of “green thumb” was given by London Brockbank in a worldwide broadcast in which she discussed her experience working in her family’s sizeable garden in her youth. In an interview with a religious leader, she said,
“Everybody likes and enjoys picking the fruit . . . but I’d say probably weeding is the most challenging because you’re down on your hands and knees, and after a while you start to ache. And your hands are dirty. We would stain the tips of our fingers and our thumbs green from pulling.”
The interviewer responded, “That’s why they said you had a green thumb.”
Brockbank replied, “Yes, you’d think it was because the plants grow well; it’s because the weeds are getting pulled.”
“Green thumb” has often been taken to mean a natural, inborn skill for gardening. But it seems that a successful harvest can come from the diligent efforts of any dedicated gardener who is willing to work through the weeds.
Why do brides wear white wedding dresses in Western tradition? The answer involves a parade of British royalty, including Princess Philippa, Queen Victoria, and Princess Diana.
The earliest record of a bride wearing a white wedding dress was Princess Philippa of England when she married the Scandinavian King Eric in 1406. She wore a white silk tunic lined with squirrel and ermine fur.
In 1558, Mary, Queen of Scots, also wore a white wedding gown when she married Francis, Dauphin of France, despite the fact that the French customarily wore white in mourning.
Up until the mid-nineteenth century, some wealthier brides had a new dress made for their wedding, sometimes white, but often gold or blue or heavily brocaded with silver thread. Those of more humble circumstances simply wore the best dress they already had, whatever its color. At this time, red was a popular color in eastern Europe, black was common in Scandinavia, and those in America and western Europe often wore blue, yellow, brown, or gray. Wearing a white dress symbolized wealth and status more than anything: white was a rare and expensive color before the mastery of bleaching techniques, and only the rich could afford an elaborate, impractical dress that would be costly to keep clean. Generally, women repurposed their wedding attire for formal occasions after the wedding. Before the industrial revolution and the mass production of textiles, it would have seemed absurd to wear any dress only once, even for the upper crust of society.
The Victorian Wedding Dress
In 1840, Queen Victoria married Prince Albert in a now-iconic white lace dress. It both reflected and set the fashions of the age—the champagne-colored dress, with an off-the-shoulder neckline, a tight bodice hugging the Queen’s natural waist, and a full skirt held out with petticoats was the height of style in the Victorian Era. It featured handmade Honiton lace from a small village called Beer, an attempt by Victoria to support the struggling lace industry in the country. Rather than wearing a jeweled tiara, the queen chose a crown of orange blossoms and myrtle, which would have been more reflective of a commoner’s wedding attire rather than that of an upper-class socialite. This elegant take on a simple style endeared Victoria to her subjects and made her seem more down-to-earth than other royals before her.
With no elaborate jewelry, bright colors, or gold embroidery, Victoria’s wedding dress was a decided departure from those of the queens and princesses who had gone before. Her wedding was frugal, comparatively speaking, and conveyed her good sense and prudence as a ruler as well as her love for Albert, uncluttered by heirloom jewels or fur trimmings.
Illustrations of the royal couple were widely circulated, and every newspaper column and women’s magazine reported on Victoria’s dress for months on end. Both British subjects and American onlookers were enraptured with the Queen, romanticizing her relationship with Albert as one of love and domestic bliss. As images of Queen Victoria’s wedding gown spread across Europe and North America, the upper classes began to copy her style. Many brides opted for white wedding dresses inspired by Victoria, often with embroidered silk, lace, or floral detailing.
Queen Victoria presented an image of simplicity and good taste with her bridal wear, but ironically, the white wedding dress became a symbol of conspicuous consumption. It caught on in society precisely because it was quite expensive for the average person. A white dress that would dirty easily through any kind of work or even the tasks of daily living would be impractical for all but the richest members of society.
A Reversal of Values
In 1849, Godey’s Lady’s Book, a popular women’s magazine in the United States, claimed that “custom has decided, from the earliest ages, that white is the most fitting hue [for brides], whatever may be the material. It is an emblem of the purity and innocence of girlhood, and the unsullied heart she now yields to the chosen one.” This was a dubious claim to anyone who looked a little closer—white had very recently become the color of choice for wedding dresses, and it was clearly worn as a show of wealth rather than a symbol of purity.
In addition, at the time, the color blue rather than white was associated with purity and with the Virgin Mary. Mary was often depicted wearing a blue robe (which, as it happens, was because blue was the color worn by an empress in the Byzantine Empire). Up until white wedding dresses came into fashion, many women specifically chose to wear blue dresses because of their association with purity.
Godey’s Lady’s Book used some inaccurate history and creative hyperbole to promote the white wedding dress, and it’s clear that color symbolism is not always straightforward. However, the symbolism of the white dress did in fact shift to an association with purity and innocence. Though these values seem outdated to many today, they make sense in light of traditional cultural expectations of a young woman’s conduct before marriage.
Industry and War
With the industrial revolution and subsequent innovations in manufacturing, fashionable clothing in general became more available to the average person. Bleaching techniques allowed for the production of cheaper fabric in a true white color, rather than the cream or eggshell hues that were produced in the nineteenth century. Synthetic fibers, developed in the late 1800s and early 1900s, were also used to create cheaper and more durable clothing. Better laundering techniques allowed for washing and preserving white clothing for longer than ever before. All these advances allowed more women to buy a white wedding dress specifically for their wedding.
It was not until the end of World War II that a white wedding dress was expected for most brides. Wartime rationing was over, and there was increased prosperity throughout the United States—what a delight it was to buy a nice dress to celebrate a special occasion! Hollywood movies also featured brides walking down the aisle in white, contributing to the color’s popularity.
The white wedding dress was thus recognized as tradition for all social classes in the mid-twentieth century, and white is the color of choice in many countries around the world, from Australia to Singapore to Italy. In fact, Chinese brides often pose for a wedding photoshoot in a Western-style white dress, then wear a traditional red dress on their actual wedding day.
Actors, princesses, and other celebrities continue to influence wedding dress fashion in Europe and North America. Princess Diana’s elaborate, puffy-sleeved wedding gown reflected the trends in 1981, and Duchess Kate Middleton’s lace sleeves became wildly popular after her wedding in 2011. Major designers immediately scrambled to emulate the royal wedding dresses in their own designs.
Around the 1960s, some brides began to wear more colorful frocks, largely inspired by Elizabeth Taylor’s green, yellow, and rainbow wedding dresses (she was married eight separate times). Today, a small subset of brides wear colored or black dresses or even floral prints. Though colored dresses continue to become more common, none of these trends has yet gained enough momentum to displace the white wedding dress, which remains ingrained in the popular idea of a wedding in the Western world.
Extra Credit: Watch 100 Years of Fashion: Wedding Dresses for a fascinating look at wedding dresses in the past century.
Why is sliced bread our reference for things that are new and incredible? The answer involves wrapped bread, banned bread, and Wonder Bread.
“It’s the best thing since sliced bread!” you might proclaim about fast wi-fi, a meal-delivery service, or a new TV show. This phrase is used to describe a remarkable, revolutionary innovation. The Kansas City Star noted that “the phrase is the ultimate depiction of innovative achievement and American know-how.” When it comes to sliced bread itself, it’s an innovation that we now take for granted but once seemed a marvel of modern mechanization.
Rohwedder’s Bread Slicing Machine
The earliest bread-slicing machines appeared in America in the 1860s and used parallel blades to slice bread. However, they sat on shelves, mostly unused and unnoticed for decades. In the meantime, other machinery was developed that could produce loaves of bread of uniform shape and size.
A jeweler from Iowa named Otto Frederick Rohwedder invented the first electric bread-slicing machine that worked in tandem with modern production methods. He built a prototype that was, sadly, destroyed in a fire in 1912. Rohwedder finished the machine in 1917, but many companies refused to buy it because they were concerned that consumers wouldn’t be interested in pre-sliced bread—weren’t people just fine cutting it themselves? Additionally, they worried that the bread would crumble and grow stale too quickly if it were sliced. This problem was solved by wrapping the bread in wax paper immediately after it was sliced.
The bread-slicing machine was finally put into service in 1928 by the Chillicothe Baking Company in Chillicothe, Missouri. The Chillicothe Constitution-Tribune ran a full-page ad on July 6, 1928, to spread the word about the innovative new product.
The ad noted that this gave Chillicothe Baking Company “the distinction of being the first bakers in the world to sell sliced bread to the public.”
And notice that the greatest thing before sliced bread was wrapped bread! Bread had been mass-produced, wrapped in wax paper, and sold to grocery stores since about the 1920s; from then on, families no longer had to make several loaves of homemade bread every week. The combination of sliced and wrapped bread would prove to be an even more successful innovation for the bread industry.
The Wonder of Sliced Bread
Pre-sliced bread quickly gained momentum, and within two years of its introduction, 90% of the bread sold in stores was sliced. It was convenient and consistent, and customers loved it. Other inventions such as the toaster reinforced and adapted to the popularity of uniformly sliced bread.
In addition, bread consumption increased because it was so much easier to eat more bread—the knife no longer stood as a barrier between an American and a slice of bread with jam. In fact, the consumption of butter, jam, and other spreads increased as well as people ate more slices of bread, more frequently. Sliced bread eased the burden on mothers who formerly had to slice a whole loaf of bread in the morning to make toast for breakfast and sandwiches to pack for lunch for a growing family.
Other bread companies caught the wave of sliced bread and experimented with similar campaigns and further innovations. Some sold extra thick or extra thin slices of bread (in fact, loaves of bread are still sold according to slice thickness in the United Kingdom). In 1933, one bakery offered thick and thin slices in the same loaf and marketed it as “the first improvement since sliced bread.” Rohwedder also sold his patent for the slicing machine in 1930, and other bakeries and inventors improved upon the model.
Wonder Bread followed Chillicothe’s lead with marketing campaigns advertising “a truly wonderful bread,” constantly talking up its uniform, snowy white loaves that were now pre-sliced thanks to the company’s own slicing machines. Whereas Chillicothe was a smaller-scale bakery, Wonder Bread produced the first commercially manufactured sliced bread in America and used delivery trucks to ship bread around the nation. By the 1930s, Wonder Bread had built its brand upon its uniform, pre-sliced loaves, which became an icon of the enormous manufacturing capacity of the United States.
Now for a horror story. In 1943, during the height of World War II, the U.S. government issued a ban on sliced bread due to wartime shortages and a need to focus on manufacturing weaponry. Sliced bread required more wrapping materials, and there had been a 10 percent rise in the price of flour, so the ban was supposed to reduce waste and save money. Banned bread! What an outrage for carbohydrate-loving consumers and mothers who were already harried for time! The ban was lifted two months later due to widespread outcry. It seems that sliced bread was too much a fact of American life at that point to be taken away. Besides, the ban also had but a small effect on savings, and many bakeries were hard-pressed to comply.
Mechanization vs. Back to Nature
One of the most significant effects of the industrial revolution was the mechanization of everyday life. The ease and convenience of pre-sliced bread is a seemingly small time-saver that yields a great return. Many rushed mornings have been spared from further chaos by a loaf of bread ready for the toaster or the lunchbox.
More recently, dissatisfaction with highly processed foods and modern manufacturing methods has caused some people to return to making more food at home. Whether due to health reasons, countercultural currents, or environmental concerns, more Americans are turning to nonuniform, homemade, slice-by-yourself bread—just like great-grandma used to make.
So, what is the best thing since sliced bread? Perhaps it’s a loaf of homemade whole wheat bread, as a foil to mass consumerism—or perhaps it’s a new smartphone, a better mousetrap, or gluten-free cinnamon raisin bread.
Where does the children’s game duck, duck, goose come from? The answer involves Swedish immigrants, imaginative children around the globe, and a rainbow of aquatic birds.
Duck, duck, goose is a popular children’s game in the United States that you’ve likely played many times. But here’s a refresher on how it’s done:
To play duck, duck, goose, players sit on the ground in a circle. One player, whom we will call the “runner,” walks around the outside of the circle, tapping each participant’s head while saying the word “duck.” At some point, the runner says “goose” as he or she taps a target player, and then the “goose” must chase the runner around the circle until the runner reaches the “goose’s” former seat and takes his or her place.
But just try telling that to a Minnesotan. In Minnesota, “duck, duck, gray duck” is the game of choice. Besides calling the target a gray duck instead of a goose, the person who is “it” also adds colors to each duck in the circle. “Red duck, purple duck, blue duck, grrrrr . . . een duck!” he or she might say, until finally naming the gray duck.
This game was introduced by Swedish immigrants who put down roots in the United States in the nineteenth and early twentieth centuries. During this time, about 1.3 million Swedes relocated to America, primarily settling in the Midwest along with other Scandinavian immigrants. The Swedes were driven by population growth, poverty, and religious repression and attracted to America by greater economic opportunity and political freedom. They brought with them various traditions that have influenced American culture.
Minnesota is the only state that plays the “gray duck” way in the United States, but both versions came from Sweden. The Swedish name for the game that immigrants brought to Minnesota was anka-anka-grå-anka, “duck-duck-gray duck.” Swedish immigrants who arrived in other states brought with them a variant called anka-anka-gås, or “duck-duck-goose.”
It’s unclear exactly why duck, duck, goose gained so much traction in 49 of the 50 states in America. Children’s games are often passed down orally, which makes it hard to pin down the exact historical origins, and they are frequently changed in imaginative ways by different groups of children as they pass them along.
Interestingly, children’s games are strikingly similar around the world, and duck, duck, goose is no exception.
For example, a book detailing children’s games in England, Scotland, and Ireland in the late 1800s described a game called “kiss in the ring” that involved one player walking around the other players sitting in a ring and tapping each one on the head with a handkerchief, saying “Not you, not you, not you” until reaching the desired target—“But you!”—and the chase would commence.
A game in India called rumaal chor has one player, the “thief,” run around a seated circle of participants who extend their arms behind them. The thief drops a handkerchief along the way and whoever grabs it must jump up and catch the thief before he or she sits down.
In Chile, children play corre, corre la guaraca by sitting in a circle with their eyes closed as one child runs around the outside with a handkerchief. Participants are bopped on the head if they attempt to look around. The runner must place the handkerchief on one child’s back to mark the child as the guaraca (which is a nonsense word) without him or her noticing, then run a full circle and sit down before the guaraca notices and tags the runner.
In some versions of the game, the participants imagine that whoever is “it” is contagious with some kind of disease, and other participants want to avoid their touch so they don’t get “sick.” Players in Italy avoid the runner like the plague, those in Madagascar are afraid of leprosy, and participants in Spain flee from fleas.
The global popularity of similar children’s games goes to show that there is not necessarily one “true” source of a particular game. Any game may emerge in similar forms in different places as children create new ways to play together.
Gomme, Alice Bertha. The Traditional Games of England, Scotland and Ireland: With Tunes, Singing Rhymes and Methods of Playing According to the Variants Extant and Recorded in Different Parts of the Kingdom. (London: Nutt, 1894–1898), pp. 308–309. The Internet Archive.