The Shelby Light Bulb

Dangling from the ceiling of a California firehouse is a bulb that’s burned for 989,000 hours – nearly 113 years. Since its first installation in 1901, it has rarely been turned off, has outlived every firefighter from the era, and has been proclaimed the “Eternal Light” by General Electric experts and physicists around the world.
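As a quick sanity check on those figures (a back-of-the-envelope conversion, assuming the bulb really has burned nearly continuously since 1901):

$$\frac{989{,}000\ \text{hours}}{24\ \text{hours/day} \times 365.25\ \text{days/year}} \approx 112.8\ \text{years},$$

which squares with the "nearly 113 years" quoted above.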

Tracing the origins of the bulb — known as the Centennial Light — raises questions as to whether it is a miracle of physics or a sign that newer bulbs are weaker. Its longevity remains a mystery.

A Brief History of the Light Bulb

Thomas Edison worked on improving carbon filaments. By 1880, by using a higher vacuum and developing an entire integrated system of electric lighting, he had improved his bulb's life to 1,200 hours and begun producing the invention at a rate of 130,000 bulbs per year.

In the midst of this innovation, the man who would build the world’s longest-lasting light bulb was born.

The Shelby Electric Company

Adolphe Chaillet

Adolphe Chaillet was bred to make exceptional light bulbs. Born in 1867, Chaillet was constantly exposed to the burgeoning light industry in Paris, France. By age 11, he began accompanying his father, a Swedish immigrant and owner of a small light bulb company, to work. He learned quickly, garnered an interest in physics, and went on to graduate from both German and French science academies. In 1896, after spending some time designing filaments at a large German energy company, Chaillet moved to the United States.

Chaillet briefly worked for General Electric, then, riding on his prestige as a genius electrician, secured $100,000 (about $2.75 million in 2014 dollars) from investors and opened his own light bulb factory, Shelby Electric Company. While his advancements in filament technology were well-known, Chaillet still had to prove to the American public that his bulbs were the brightest and longest-lasting. In a risky maneuver, he staged a “forced life” test before the public: The leading light bulbs on the market were placed side-by-side with his, and burned at a gradually increased voltage. An 1897 volume of Western Electrician recounts what happened next:


“Lamp after lamp of various makes burned out and exploded until the laboratory was lighted alone by the Shelby lamp — not one of the Shelby lamps having been visibly injured by the extreme severity of this conclusive test.”

Shelby claimed that its bulbs lasted 30% longer and burned 20% brighter than any other lamp in the world. The company experienced explosive success: According to Western Electrician, they had “received so many orders by the first of March [1897], that it was necessary to begin running nights and to increase the size of the factory.” By the end of the year, output doubled from 2,000 to 4,000 lamps per day, and “the difference in favor of Shelby lamps was so apparent that no doubt was left in the minds of even the most skeptical.”

Over the next decade, Shelby continued to roll out new products, but as the light bulb market expanded and new technologies like tungsten filaments emerged, the company found itself unable to make the massive monetary investment required to compete. In 1914, it was bought out by General Electric, and Shelby bulbs were discontinued.

The Centennial Light

Some seventy-five years after the bulb was made, in 1972, a fire marshal in Livermore, California informed a local paper of an oddity: a naked Shelby light bulb hanging from the ceiling of his station had been burning continuously for decades. The bulb had long been a legend in the firehouse, but nobody knew for certain how long it had been burning, or where it came from. Mike Dunstan, a young reporter with the Tri-Valley Herald, began to investigate — and what he found was truly spectacular.

Tracing the bulb’s origins through dozens of oral narratives and written histories, Dunstan determined it had been purchased by Dennis Bernal of the Livermore Power and Water Co. (the city’s first power company) sometime in the late 1890s, then donated to the city’s fire department in 1901, when Bernal sold the company. As only 3% of American homes were lit by electricity at the time, the Shelby bulb was a hot commodity.

In its early life, the bulb, known as the “Centennial Light,” was moved around several times: It hung in a hose cart for a few months, then, after a brief stint in a garage and City Hall, it was secured at Livermore’s fire station. “It was left on 24 hours-a-day to break up the darkness so the volunteers could find their way,” then-Fire Chief Jack Baird told Dunstan. “It’s part of another era in the city’s past [and] it’s served its purpose well.”

Though Baird acknowledged that it had once been turned off for “about a week when President Roosevelt’s WPA people remodeled the fire house back in the 30s,” Guinness World Records confirmed that the hand-blown, 30-watt bulb, at 71 years old, was “the oldest burning bulb in the world.” A slew of press followed, and the bulb was featured in Ripley’s Believe It or Not! and on major news networks.

Aside from the 1930s fire house remodel, the bulb has only lost power a few times — most notably in 1976, when it was moved to Livermore’s new Station #6. Accompanied by a “full police and fire truck escort,” the bulb arrived with a large crowd eager to see it regain power, but, as recalled by Deputy Fire Chief Tom Brandall, “there was a little scare”:

“We got to [the] new location and the city electrician installed the light bulb and made [the] connection. It took about 22-23 min, and [the bulb] didn’t come back on. The crowd gasped. The city electrician grabbed the switch and jiggled it; it went on!”

Once settled, the bulb was placed under video surveillance to ensure it was alive at all hours; in subsequent years, a live “Bulb Cam” was put online. At one point, the bulb’s groupies (it has some 9,000 followers on Facebook) received another scare when it lost light.

At first it was suspected that the light had finally met its demise, but after nine and a half hours, it was discovered that the bulb’s uninterrupted power supply had failed; once the power supply was bypassed, the bulb’s light returned. The 113-year-old bulb had outlived its power supply, just as it had outlived three surveillance cameras.

Today, the bulb still shines, though, as one retired fire volunteer once said, “it don’t give much light” (only about 4 watts). Owning a frail piece of history comes with great responsibility: Livermore firefighters treat the little bulb like a porcelain doll. “Nobody wants that darn bulb to go out on their watch,” once said former fire chief Stewart Gary. “If that thing goes out while I’m still chief it will be a career’s worth of bad luck.”

“They Don’t Make ‘Em Like They Used To”

Everyone from Mythbusters to NPR has speculated on the reasons for the Shelby bulb’s longevity. The answer, in short, is that it remains a mystery – Chaillet’s patent left much of his process unexplained.

In 2007, Annapolis physics professor Debora M. Katz purchased an old Shelby bulb of the same vintage and make as the Centennial Light and conducted a series of experiments on it to determine how it differs from modern bulbs. She reported her findings:

“I found the width of the filament. I compared it to the width of a modern bulb’s filament. It turns out that a modern bulb’s filament is a coil, of about 0.08 mm diameter, made up of a coiled wire about 0.01 mm thick. I didn’t know that until I looked under a microscope. The width of the Shelby bulb’s 100-year-old filament is about the same as the width of the coiled modern bulb’s filament, 0.08 mm.”

The Lightbulb Cartel

Light bulb companies like Shelby once prided themselves on longevity – so much so, that the durability of their products was the central focus of marketing campaigns. But by the mid-1920s, business attitudes began to shift, and a new rhetoric prevailed: “A product that refuses to wear out is a tragedy of business.” This line of thought, termed “planned obsolescence,” endorsed intentionally shortening a product’s lifespan to entice swifter replacement.

In 1924, Osram, Philips, General Electric, and other major electric companies met and formed the Phoebus Cartel under the public guise that they were cooperating to standardize light bulbs. Instead, they purportedly began to engage in planned obsolescence. To achieve this, the companies agreed to cap the life expectancy of light bulbs at 1,000 hours, less than Edison’s bulbs had achieved (1,200 hours) decades before; any company that produced a bulb exceeding 1,000 hours of life would be fined.

Until disbanding during World War II, the cartel supposedly halted research, preventing the advancement of the longer-lasting light bulb for nearly twenty years.

Whether or not planned obsolescence is still on the agenda of light bulb manufacturers today is highly debatable, and there exists no definitive proof. In any case, incandescent bulbs are being phased out worldwide: Since Brazil and Venezuela began the trend in 2005, many countries have followed suit (European Union, Switzerland, and Australia in 2009; Argentina and Russia in 2012; the United States, Canada, Mexico, Malaysia, and South Korea in 2014).

As more efficient technologies have surfaced (halogen, LED, compact fluorescent lights, magnetic induction lights), the old filament-based bulbs have become a relic of the past. But perched up in the white ceiling of Livermore’s Station #6, the granddaddy of old-school bulbs is as relevant as ever, and refuses to bite the dust.

Source document written in 2014.

History of St. Paul’s Lutheran Church

I was unable to find a picture by itself of the current church/school but you can see it in the opening picture in this video. Overhead view of the current church, with the school I attended on the right; I was baptized in this church in 1953 and was confirmed in April 1967. The entire wing on the left and the parking lot was added after I left the area.

In the year 1865, a group of members of the Evangelical Lutheran St. Paul congregation at Ixonia, WI gathered together with the desire to raise their children near a church and school. This led them to consider migration. Pastor Hoeckendorf, the congregation’s minister at that time, had relatives who lived near West Point, NE, so the group got the idea to send scouts into this area. They wanted some trustworthy people to check everything out right there on location.

The info in this post was taken from this booklet.

They entrusted this important matter to “Father” Braasch, “Father” Wagner, and John Gensmer. These men departed for NE and, since the area surrounding West Point was already more or less settled and the whole group couldn’t possibly also settle there, they ventured further north over the wild plains of Nebraska until they came to the area which is now Norfolk.

They found that the land was fertile, the water drinkable, and wood was also found on the North Fork and the Elkhorn rivers. Very pleased with their finding, they joyfully returned to Ixonia and delivered the good news.


On May 23, 1866, it was time for the old pioneers to leave their homes and strike out toward their new destination. It was a difficult time since many heartrending goodbyes were required – parents to their children, children to their parents, brothers and sisters parted, relatives and friends shook hands for the last time. The long journey was made in “prairie schooners” pulled by horses and oxen. In 3 caravans, 53 wagons moved through the uncultivated terrain, accompanied by cattle and sheep. Along the way, they encountered great difficulties, such as crossing rivers without bridges and maneuvering through swamps. Some days they had to stop to wash clothes and bake bread and on Sundays, they observed regular church services, which were led by Father Braasch, the leader of the whole train.

Around the 12th of July 1866, the members of the new German Settlement arrived in close proximity to the present-day Norfolk. After the land was measured and raffled off, everybody moved onto their allotted properties from 17-20 July.

Note: You may need to enlarge the pic to see – on the left, just over half-way down, you will see the name “William Duhring.” (My brother inherited the farm and now his children have inherited it from him – Chris gets the land in order to keep it in the Deering name, ‘Nette gets the house.) That was my birth grandfather, Arnold Deering’s father (Grandpa changed the spelling of his last name to appear less German, probably due to WWII). If you look up further towards the center, close to the river, you will see the name “Martin Raasch,” my adopted great-great-grandfather.

I’m not sure when this picture was taken – clearly not in 2007 – but these were the 4 remaining founders still alive at that time. August Raasch, my adopted great-grandfather, was the first postmaster in Norfolk. He was wounded at Gettysburg and carried shrapnel in his back until he died; in later years, he was basically an invalid but with 12 children (mostly boys), he had plenty of help on the farm.

Of course, it took time to build homes and barns so, in the meantime, they either built one-room log cabins or sod houses.

The first services of this new settlement took place in a shed on the North Fork of the Elkhorn River. Shrubs and branches covered the roof to provide shade, and the dirt floor was covered with hay. For the rest of that first summer, they held church services in this shed. I don’t know when the first real church, a log building 24 X 30, was built – there was no altar or chancel, and the benches consisted of boards laid on wooden blocks. Occasionally the boards would fall over when the people rose during the service. This church was used until the year 1878; in 1876, the congregation had bought 12 acres from Pastor Hoeckendorf for $120.

The first parsonage was built in 1878 and at the April meeting that year, the congregation decided to build a new church. The new one would be 36 X 50 and cost approximately $1,405. The number of school children increased significantly so the congregation found it necessary to hire a regular teacher and build a school house. Since they already had a teacher, a house for him was also required, which was constructed in 1884.

Although the church building was finished, the interior was bare – no chancel, altar, benches or organ. Father Braasch made the initial contribution when he paid for an altar and chancel for the church, setting an example for the wealthier members. The congregation bought the benches and, in 1884, acquired a pipe organ (the organ still remains in the current church, as you will see in the interior picture). Since the church did not have enough seating for attendees and the school also needed another classroom, the congregation voted unanimously to build a new church. During a meeting on January 21, 1907, the decision was made to build a brick building.

Architect Stitt created the plans and specifications for the beautiful building, which was designed in the Gothic style of the 13th century. The cost of the building and interior came to about $24,000. The cornerstone was laid in August of 1907 and the dedication took place on May 3, 1908. The old church was remodeled to serve for school functions and weekly catechism.

In July 1916, it had been 50 years since the founding fathers of our congregation arrived on these grassy plains. Since the congregation did not want to let this day pass without an expression of gratitude to God, they decided to celebrate their 50th anniversary on July 16, 1916. For this event, they had the interior of the church painted – the finished work is a credit to the master, Mr. Art Reiman of Milwaukee, and is a perfect work of art.

At the end of the 1st row is my birth grandmother, Marie Deering (she loved Hitler, btw); in the 2nd row, you will see my grandfather, Arnold, as well as Ernest Raasch, my adopted grandfather.
Esther Raasch was my adopted grandmother – Ernest died around the time I was born. He was a Nebraska State Senator. My birth mother lived with them for a period of time while she was in HS – she and my adopted Mom were close friends.

17TH AMENDMENT WAS A HUGE MISTAKE

Since it was ratified in 1913, we have been losing states’ rights, and the loss is accelerating. Today we see the abuses of the federal government destroying the powers of the states at an alarming rate.

The 17th Amendment must go if we are ever to rein in the abuses of the federal government. The good news is that we do not need a Constitutional amendment to do that. Here’s why…

Article V of the US Constitution spells out the process for amending the Constitution. The last line of Article V seems prophetic in that it spells out that the 17th Amendment could not happen unless 100% of the states agreed. That line reads, “and that no State, without its Consent, shall be deprived of its equal Suffrage in the Senate.”

What this clearly means is that no State (as an entity – that is why it is capitalized) can have its representation in Congress taken from it without consent. Taking that representation away from the state as an entity and placing it with the people of that state clearly takes away any representation in the US Senate for the states. The amendment, however, was not ratified by all the states: a total of 12 did not consent.

Some might argue that the 36 states that did ratify it would be bound by it. But this would violate equal representation, and therefore could not stand.

As with all governments, it becomes necessary, from time to time, to reacquaint ourselves with its basic mechanics of operation. The Founding Fathers gave posterity a written constitution to aid in this process. When there are doubts as to its meaning, one must study its original intent to discern proper application, for “the intent of the Lawgiver is the Law.”

Current events in this nation have provoked citizens and scholars to perform this assessment — to “retrace our steps” — in yet another area: the principle of federalism. Simply defined, federalism is ‘a system that combines States retaining sovereignty within a certain sphere with a central body possessing sovereignty within another sphere, and a third sphere where concurrent jurisdiction [exists].’

After years of silence on the matter, a resurgence of interest in federalism became evident. President Reagan’s “New Federalism,” “The Federalist Society,” and a report on federalism issued by the Domestic Policy Council were just some of the manifestations of this increasing concern.

The reason for the current interest is that America is reaping the fruit of centralized government. Contrary to the Founding Fathers’ original vision of separate spheres of jurisdiction between the people, the states, and national government, our current system is now dominated by the national government.

The United States Constitution, as drafted by the Founding Fathers, clearly enumerated the limited powers of the national government. All other powers were reserved to the states or the people. The 10th Amendment affirms this noting: ‘The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.’

The separate spheres of jurisdiction of the national and state governments have gradually been eroded. The national government has increasingly usurped the reserved power of both the people and the states. It has been documented that: ‘States, once the hub of political activity and the very source of our political tradition, have been reduced — in significant part — to administrative units of the national government….’

As a result of this erosion process, both the national government and the state governments are crippled in their effectiveness. The national government, having taken on too much power, is unable to properly administer all the areas it has arrogated unto itself. On the other hand, the state governments are impotent in legislating and executing the will of the people because they are subject to unpredictable subjugation by the national government.

Our founding document, the Declaration of Independence, proclaims as self-evident the proposition that “all men are … endowed by their Creator with certain unalienable rights,” and that “to secure these rights, governments are instituted among men.” When state governments so instituted become impotent, then it is their right and duty to reacquire the appropriate power in order to fulfill the purpose for which they were originally established.

History of the Bath Tub

Search the web, and you’re sure to read that America’s first bathtub was installed in 1842—December 20, to be exact. It would be nice if such a mercurial vessel had so neat a beginning—even H.L. Mencken, the newspaperman who concocted this hoax as an uplifting wartime news story, would agree. What is true is that no accessory embodies the metamorphosis of bathing equipment (from moveable furniture to plumbed-in-place fixtures) or helps define the use and look of a bathroom in any era as much as the bathtub.

Antebellum Scrubs

Before indoor plumbing, bathtubs—like chamber pots and washbowls—were moveable accessories: large but relatively light containers that bathers pulled out of storage for temporary use. The typical mid-19th-century bathtub was a product of the tinsmith’s craft, a shell of sheet copper or zinc.

“Late 1800’s Zinc and Cast Iron Bath Tub With Oak Trim”

In progressive houses equipped with early water-heating devices, a large bathtub might be site-made of sheet lead and anchored in a coffin-like wooden box.

Later, there were ingenious (though ultimately impractical) hideaway alternatives, like the portable canvas tub (similar to a pot-bellied cot), or the Mosely folding bath tub—an armoire-like contraption with a hinged door that pulled down like a Murphy bed to reveal a bathing saucer.

The Mosely Folding Bath Tub pulled down like a Murphy bed.

However, for decades, the bathtub most Americans knew best was the one available in a 1909 hardware catalog: a tinware plunge bath with wood-covered bottom painted in Japan green (a type of pre-1940 enamel paint).

As running water became more common in the latter 19th century, bathtubs became more prevalent and less portable. Though copper was still used for wood-enclosed tubs as late as the 1910s, it more commonly appeared as a liner for steel-cased tubs, rimmed in oak or cherry, that stood on bronzed iron legs.

This wood-encased period galvanized tin tub is in Astoria, Oregon’s 1885 Flavel House museum.

Cast iron—the all-purpose material of the Victorian era—had been poured into sinks and lavatories since the late 1850s, and by 1867 the famous J.L. Mott Iron Works was finding a ferrous niche in the bathtub market as well.

However, the big catch with all of these conveniences was corrosion. Copper and zinc discolored readily around water and soap, and the seams of sheet metal were hard to keep clean. Iron and steel, of course, eventually rusted, even under the most meticulous coat of paint.

Glaze Crusades

A china-like glaze seemed to be the ideal, obvious solution, but producing a vitreous skin on an object the scale of a tub was not so simple. Though cast iron sinks were porcelain enameled, iron bathtubs were a far more complex shape, and when filled with hot water, they could expand more than the coating, risking delamination.

In the 1850s, British artisans cracked the tub-coating code by taking a different tack: all-ceramic tubs with a glazed surface. Because the tubs were both fragile and heavy, they were iffy for export, but the idea found a market on English shores, and by the 1890s, solid porcelain tubs were being fired up by manufacturers like Trenton Potteries.

An ordinary-style tub—sloped at the head, flat and plumbed at the foot—was the most common, and affordable, early porcelain model.

The solid porcelain tub scratched many itches. Besides satisfying the need for a seamless, smooth, washable surface that wouldn’t rust, it provided a continuous, roll-over edge around the perimeter of the basin. Indeed, one of the subtle attractions of the porcelain tub was its sensuous, smooth curves and zaftig proportions. Whether it stood on bulbous ceramic legs or muscular sides that ran to the floor (thereby eliminating unsanitary hidden spaces), the porcelain tub was a study in robust modeling. Ads from the 1910s asked, “Why shouldn’t the bathtub be part of the architecture of the house?”

Seemingly the ultimate in modern bathing, solid porcelain had its downsides. For one thing, such tubs were dauntingly heavy and equally pricey. In 1909, prices ran from $180 for a 4 1/2′-long model to $255 for a massive 6 1/2-footer—this at a time when a steel-cased footed tub could be had for around $25. Plus, some bathers felt the pottery mass absorbed too much heat from the water, making it expensive to use.

High-Tech Tubs

Drawbacks aside, the solid porcelain tub remained the Cadillac of the bath industry into the 1920s and the hallmark of a high-end bathroom. Indeed, before 1910, bathrooms in and of themselves were often status symbols. In an era when houses with running water and waste piping were new and modern, a single bathroom with lavatory, flushing toilet, and fixed tub was a sign of progressive thinking and an essential step in the march toward better hygiene. What’s more, the bathrooms of the wealthy were not so much places of daily cleanup and dressing, but therapeutic laboratories akin to personal spas. The shower we now associate with a daily spritz was frequently a stand-alone cage of multiple sprays designed for skin or kidney stimulation, while tubs were dispersed around the room for soaking one or more parts of the body.

Roman tubs with nearly vertical, sloping round ends were thought to look more balanced and elegant in bathrooms, and usually came with faucets mounted on a long side.

As multiple-fixture, high-tech bathrooms started to evaporate after World War I (along with the large houses that made them possible), the new paradigm for up-to-date ablution became the porcelain-enameled, cast-iron, footed tub—the ubiquitous clawfoot type still at work for thousands of bathers today.

The Cast-Iron Tub

The J.L. Mott Iron Works was among the first to solve the porcelain-on-iron puzzle in the late 1880s with better techniques for preparing the iron and firing the coating, and when production improvements reduced costs in the 1920s, the cast-iron tub soon took over the bathroom. The typical tub style was the ordinary, a round-bottomed trough with a sloping head and a vertical foot holding water inlets and outlets. The other common style was the Roman, with flat sides and bottom, and identical (nearly vertical) sloping, rounded ends.

Antique Cast Iron Tub

Fancy, upscale lavatories could include both a sitz (at left) and foot bath (at right) to complement the bathtub and state-of-the-art ribcage shower, per a 1912 Standard Sanitary catalog.

The Built-In Tub

For a new century increasingly on the alert for germs, the only thing better than a tiled-in recess tub was one shipped this way straight from the factory. Casting one-piece tubs with a rim that extended down to the floor in an apron wasn’t easy, but by 1911, the Kohler Company, followed swiftly by its competitors, introduced the built-in tub—still a bathroom standard today. Made with one enclosed side (or one side and an end), the built-in tub was not only efficient in its own right, but as a 5′-long model that spanned the walls of the typical 5′ square bathroom, it became the cornerstone of the modern, functional Jazz Age bathroom trinity: wall-hung lavatory, water closet, and tub-and-shower combo.

Color Craze

Like Henry Ford, who promised auto buyers any color they wanted so long as it was black, sanitary ware manufacturers were at first color-blind to anything but white. White was not only the color of sanitation, making it easy to spot grime and therefore clean, it was also the optimal color to produce reliably from item to item.

Just like with the auto industry, however, all that began to change in the late 1920s. Once the bathroom reached a plateau as an efficient, hygienic cleansing hospital, it began to be viewed as a vehicle for design and household beauty, and around 1929, color came into the bathroom in a big way.

This inviting bathroom suite, featuring tan vitrolite walls and colorful Spring Green fixtures—including a separate, petite dental sink—appeared in a 1939 Kohler brochure.

Pigmenting the vitreous finish in fixtures—at first in light pastels, then in deeper hues like royal blue, Ming green, and Chinese red—brought color to the bathroom in solid swaths far more dramatic and permanent than any paint or tile.

Always key bathroom players by dint of their sheer size and function, bathtubs became ever more pivotal when they moved away from white. As color put a design spin on fixtures in the 1930s and ’40s, they began to look—once again—like furniture, with lavatories resembling tables and toilets approximating chairs. In this light, tubs might stand in for beds, especially when detailed with the rectangular outlines popular in the Art Moderne era and in velvety colors of rich maroon or black. It was a long way from the tin tub that had been hauled out of a closet only a generation or two before.

THE RADIUM (GHOST) GIRLS

On April 20, 1902, after years of hard work, Marie and Pierre Curie successfully isolated a brand-new element: radium. At the time, it was believed that this new material might have all kinds of beneficial properties, so radium was swiftly incorporated into many products, ranging from makeup to ceramics to health tonics and jewelry. What wasn’t understood at the time was that radium was, in fact, quite deadly. Marie died a year before the publication of her book “Radioactivity.” Before her death, she had become aware of the great perils of radiation, the very thing that took her life.

By the time Marie died, radium had taken the globe by storm. Radiation wasn’t well understood at the time, but it had positive associations; radium was branded under names such as “cure for the living dead” and “perpetual sunshine.” And perpetual is exactly what it turned out to be!

As the world descended into World War One, another use for radium came to the forefront: when infused into paint, it made the paint glow in the dark. This made it an ideal material for coating watch faces, control panels, and instrument dials, providing much-needed illumination for soldiers in the field without relying on bulky equipment. From 1917, demand for radium-coated dials skyrocketed, which was good news for the United States Radium Corporation. The company had been extracting and processing radium from uranium ore for a few years; now it expanded into mixing and applying radium-infused paint, a substance it called “Undark.”

Women using radium paint on alarm clock at a factory in 1932 | Picture Credits: Daily Herald Archive

It was no surprise that many local people were employed as dial painters. The dial painters would be supplied with radium paint and freshly stamped dials and had to use paintbrushes to strategically apply radium to the dial parts that needed to glow. Precision was required, so workers were instructed to lick the tips of their brushes in between each application to bring the bristles to a fine point.

For precision, the girls would soften the brush between their lips, thereby ingesting radium in the process | Photo Credits: Nontoxic Prints

Workers at the United States Radium Corporation had access to radium for free. They used it to paint their teeth and nails to give them a pleasant glow before heading out to dances in the evenings. Years passed. Hundreds of thousands of dials were painted and shipped out. The war eventually came to an end, much to the relief of the general population. But all was not well for the ex-workers of the United States Radium Corporation.

The original site of the Radium Luminous Materials Corporation in Orange, NJ | Photo Credits: Richard Harbus

Slowly, one by one, dial painters were falling ill. As the 1910s became the 1920s, hundreds of women who had worked as dial painters started noticing pain in their teeth and jaws. Many were having to visit their dentists regularly and were losing teeth with every visit. They were constantly exhausted, and in some cases, it was found that their jawbones were riddled with holes, reduced to a brittle hollow honeycomb. Despite this alarming wave of sickness, few were able to persuade anyone to take their ailments seriously. These female workers came to be known as the ‘Radium Girls.’

Radium Jaw, a certain sign of death in victims | Photo Credits: All That Is Interesting

When 22-year-old Mollie Maggia passed away after experiencing years of pain in her jaw and teeth, her death was attributed to syphilis. The complaints of many other women were glossed over with the same explanation, despite symptoms that pointed towards something more sinister. It was 1925 before any of the workers came to understand the devastating effect radium had on their bodies.

One of those workers was Grace Fryer. The 19-year-old started working at the Radium Luminous Materials Corp. in Orange, NJ, in 1917, and at first reveled in her job. It was lucrative — plus painting glow-in-the-dark radium on soldiers’ wristwatch faces meant she and her young female co-workers were helping in the war effort.

A decade later, her body was quite literally falling apart. The bones of her spine crumbled and required a metal brace. Tumors and abscesses sprouted in her jaw, and she was in constant pain. The radium she had ingested while working had riddled her with cancer and weakened her bones. It would soon end her life.

Furious, Grace and four of her colleagues moved to sue their ex-employer. For two years, however, no lawyer would take them seriously, despite their steadily worsening conditions. In 1928, the suit was finally filed. By this time, the demand for radium was declining, as people woke up to the dangers of radiation. Sales of radium-infused products fell further when newspapers around the world printed details of Grace’s story.

Grace Fryer, the first victim to take the radium industry to court | Photo Credits: Buzzfeed

The Radium Girls weren’t just sick; they were literally radioactive. The body of Mollie Maggia was exhumed in 1927 in the hope that her bones would give the remaining victims the evidence they needed to win their cases. Reportedly, when her coffin was lifted off the ground, her body glowed because of radiation. It wasn’t entirely surprising, considering her bones were found to be highly radioactive.

By the end of 1928, the case had been settled in favor of the female workers. They were awarded some compensation, although it was only a fraction of what they had initially demanded. Their medical bills were covered, and they were able to live out their final days with some measure of dignity. Many more suits followed, from workers not just at the United States Radium Corporation but at several companies that had handled radium in the years after its discovery; some workers testified from their deathbeds. While Grace Fryer and her colleagues are remembered for leading the fight against injustice, there were thousands more workers whose fates varied enormously.

Bedside hearings for Radium Girls were common, since many were too ill to come to court. | Photo Credits: RSNA

Though many of the radium girls suffered greatly and died before their time, their deaths were not in vain. Many of these victims volunteered for tests and medical examinations, allowing us to understand for the first time how radiation affects the human body. This persuaded scientists to take extraordinary precautions in later experiments with nuclear weapons, potentially saving thousands of lives.

In addition to this enormous service to later generations, the case pushed forward by the radium girls was the first incident in which an employer was forced to take responsibility for the health and safety of its employees. This was a revolutionary concept in 1928.

The sacrifice and courage of the Radium Girls deserve to be applauded. Their case led to the introduction of life-saving regulations for workers all over the world and the establishment of the Occupational Safety and Health Administration in the United States. The fearless champions continue to shine through history.

Few if any residents in Orange, NJ today know the history of the former plant site. Quietly tucked into a tree-lined residential neighborhood, it has been renamed High & Alden Street Park and features a playground — perhaps a fitting tribute to the young lives lost.

“I think it’s a good idea they [made it a park],’’ resident Robin Laurent, 40, recently told The Post after learning of its past. “It’s good for the people in the neighborhood and the people of Orange.’’

The playground that now stands on the site


References: https://nypost.com/2017/03/22/skin-glowing-from-radium-ghost-girls-died-for-a-greater-cause/


https://agnostic.com/post/620923/the-radium-girls-the-dark-story-of-americas-shining-women-in-the-1920s-a-group-of-factory-wor

The Great Wall of China

(Header pic from NASA, allegedly taken from space)

The Great Wall of China was not built in a few days or months; the “China Long Wall” has a very long and eventful history spanning more than 2,300 years. It has different sections that were built in various areas of China by different dynasties. The primary motive for its construction was to protect territorial borders from the Mongols and other invaders. Another was to make the Silk Road a safe and secure trade route and so help the state’s economy flourish.

Qin Dynasty and The Great Wall of China:

Even before the idea of a grand wall, when the land was divided into multiple kingdoms, China’s northern borders were protected by small walls. During the reign of Qin Shi Huang, the first emperor of a unified China and founder of the Qin dynasty, the idea of a single, strong wall with multiple surveillance posts was put forward. The idea was approved, and the previously built small walls were demolished to make way for the Great Wall of China. The plan was to construct a strong wall 10,000 li long (a li is about one-third of a mile) out of bricks, with lookout towers for guards at short intervals; these towers were also meant to strengthen the wall.
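For scale, a rough conversion of that planned length, using the one-third-of-a-mile value for the li given above:

$$10{,}000\ \text{li} \times \tfrac{1}{3}\ \text{mile/li} \approx 3{,}333\ \text{miles} \approx 5{,}360\ \text{km}$$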

General Meng Tian initially directed the project and gathered a labor force. Those who participated in construction were mostly soldiers; the rest were convicts, commoners, and rebels. During the construction of the Great Wall of China, many workers died from overwork, harsh weather, and lack of food and other necessities.

Great Wall of China After the Qin Dynasty:

The wall didn’t serve the purpose of its construction well, and the country’s internal affairs left little attention for it. After the death of Qin Shi Huang, the Qin Dynasty fell, and much of the Great Wall fell into disrepair.

Locals tried to maintain parts of the wall, but with limited success. After the downfall of the Han Dynasty, frontier tribes took control of northern China. Among those tribes, the Northern Wei Dynasty was the most powerful, and the need for safety alarmed them once again. Under the Wei Dynasty’s supervision, the wall was repaired and extended to ensure the safety of other tribes as well.

Later, the Bei Qi Kingdom ordered repairs to parts of the Great Wall of China, covering some 900 miles. Under the Sui Dynasty, the wall was repaired and extended again and again; it was the last dynasty for centuries to value the Great Wall as a fortification.

When the Tang Dynasty rose, the Great Wall lost its importance, because China had defeated the Tujue tribe to the north and expanded past the original northern border protected by the wall. Later, during the Song Dynasty, state security was once again threatened by external forces: the Liao and Jin peoples from the north were trying to take over both sides of the Great Wall and the nearby areas. So once again the wall played a role in meeting these security concerns, not perfectly, but to a considerable extent.

In 1206, the Yuan Dynasty was established by the Mongols under Genghis Khan, who eventually controlled all of China, parts of Asia, and sections of Europe. The Great Wall of China again became central to security, and once more it served its purpose as a military fortification; this time the Mongols used it for the safety of their own dynasty.

Soldiers manned the wall to guard the borders and the caravans traveling to and from the Silk Road trade routes.

Wall Building During the Ming Dynasty:

Most of the wall that we see today was not originally constructed by the Qin Dynasty. Time, nature, and multiple invasions damaged the original construction of the Great Wall of China, and the dynasties that ruled the land one after another repaired and extended parts of it from time to time.

In 1368, the Ming Dynasty took control of China and reconstructed the Great Wall. It was a time when Chinese culture flourished and the trading system grew strong. In the early period of the Ming Dynasty, border security and construction of the wall were not among the rulers’ priorities. In 1421, however, threats from external forces increased, and for reasons of trade the capital of China was shifted to Beijing.

The importance of the Great Wall of China was highlighted one more time, and the Yongle Emperor ordered the wall rebuilt, treating its reconstruction as a major defensive measure. The new strategy was not just to construct the wall but also to provide suitable facilities for on-duty soldiers and their families so that they could settle near it. The long wall standing today was thus essentially constructed during the Ming Dynasty. Major construction activities started in 1474, and the new Great Wall of China also included temples, pagodas, and bridges. The wall was later extended from the Yalu River in Liaoning Province to the eastern bank of the Taolai River in Gansu Province, winding its way from east to west through today’s Liaoning, Hebei, Tianjin, Beijing, Inner Mongolia, Shanxi, Shaanxi, Ningxia, and Gansu.

Today, west of Juyong Pass, the Great Wall splits into southern and northern lines, named the Inner and Outer Walls respectively. Strategic “pass-ways” (i.e., fortresses) and their gates were positioned along the wall. The Juyong, Daoma, and Zijing passes, closest to Beijing, were called the Three Inner Passes, while farther west lay Yanmen, Ningwu, and Piantou, the Three Outer Passes.

All six of these pass-ways were heavily garrisoned during the Ming Dynasty period and considered vital to the defense of the capital.

The Mid-17th Century and the Great Wall of China:

In the mid-17th century, the Manchus invaded China from central and southern Manchuria and broke through the Great Wall. They encroached on Beijing, and the war that followed eventually forced the fall of the Ming Dynasty. The Manchus established the Qing Dynasty.

The Qing Dynasty didn’t consider the Great Wall of China a fortification needed for the security of its borders. But between the 18th and 20th centuries, the Great Wall emerged as an emblem of the strength and defensive spirit of the Chinese nation. It is not just a wall created by emperors; it is now seen as a manifestation of the Chinese nation’s deep historical roots and struggle. Psychologically, it also represents a barrier deterring foreign cultural, physical, and other influences, and a symbol of the state’s ability to exert force over its citizens.

The Great Wall of China Today:

As China is now a socialist democratic state, the upkeep and maintenance of the Great Wall of China is the responsibility of the ruling government. The wall is considered among the most impressive architectural wonders of human history and is also one of the most visited tourist destinations in the world.

In 1987, UNESCO designated the Great Wall of China a World Heritage site. In the 20th century, it was claimed to be the only man-made structure visible from the moon or from space; scientists have since shown this to be untrue. However, on world maps and in satellite pictures, people can easily trace the Great Wall of China because it runs continuously for miles and miles.

Over the ages, roadways and small bridges have been cut through the wall at various points to connect routes on either side of it. Meanwhile, after centuries of neglect, many sections have deteriorated, approximately 30% or more to date.

On the other hand, some sections have been reconstructed and some are maintained regularly. The last major rebuilding, in the 1950s, was of the best-known section of the Great Wall of China, known as Badaling, located 43 miles (70 km) northwest of Beijing. Hundreds of foreign tourists visit this section every day.

History of Mount Rushmore

Carved into the southeastern face of Mount Rushmore in South Dakota’s Black Hills National Forest are four gigantic sculptures depicting the faces of U.S. Presidents George Washington, Thomas Jefferson, Abraham Lincoln and Theodore Roosevelt.

The 60-foot high faces were shaped from the granite rock face between 1927 and 1941, and represent one of the world’s largest pieces of sculpture, as well as one of America’s most popular tourist attractions. To many Native Americans, however, Mount Rushmore represents a desecration of lands considered sacred by the Lakota Sioux, the original residents of the Black Hills region who were displaced by white settlers and gold miners in the late 19th century.

The Loss of a Sacred Land

In the Treaty of Fort Laramie, signed in 1868 by Sioux tribes and General William T. Sherman, the U.S. government promised the Sioux “undisturbed use and occupation” of territory including the Black Hills, in what is now South Dakota. But the discovery of gold in the region soon led U.S. prospectors to flock there en masse, and the U.S. government began forcing the Sioux to relinquish their claims on the Black Hills.

Warriors like Sitting Bull and Crazy Horse led a concerted Sioux resistance (including the latter’s famous defeat of Gen. George Armstrong Custer in the Battle of the Little Bighorn in 1876), which federal troops eventually crushed in a brutal massacre at Wounded Knee in 1890. Ever since then, Sioux activists have protested the U.S. confiscation of their ancestral lands and demanded their return. The Black Hills (or Paha Sapa in Lakota) are particularly important to them, as the region is central to many Sioux religious traditions.

Battle of the Little Bighorn

The Birth of Mount Rushmore

Mount Rushmore, located just north of what is now Custer State Park in the Black Hills National Forest, was named for the New York lawyer Charles E. Rushmore, who traveled to the Black Hills in 1885 to inspect mining claims in the region. When Rushmore asked a local man the name of a nearby mountain, he reportedly replied that it never had a name before, but from now on would be known as Rushmore Peak (later Rushmore Mountain or Mount Rushmore).

Seeking to attract tourism to the Black Hills in the early 1920s, South Dakota’s state historian Doane Robinson came up with the idea to sculpt “the Needles” (several giant natural granite pillars) into the shape of historic heroes of the West. He suggested Red Cloud, the Sioux chief who signed the Fort Laramie treaty, as a potential subject.

The Needles, South Dakota

In August 1924, after the original sculptor he contacted was unavailable, Robinson contacted Gutzon Borglum, an American sculptor of Danish descent who was then working on carving an image of the Confederate General Robert E. Lee into the face of Georgia’s Stone Mountain. Borglum had a history of disputes with those who commissioned the Lee project, and they fired him; he left the sculpture unfinished. During his work at Stone Mountain, Borglum associated with members of the newly revived Ku Klux Klan, although it’s unclear whether he actually joined the white supremacist group.

Stone Mountain Carving

Borglum convinced Robinson that the sculpture in South Dakota should depict George Washington and Abraham Lincoln, as that would give it national, and not just local, significance. He would later add Thomas Jefferson and Theodore Roosevelt to the list, in recognition of their contributions to the birth of democracy and the growth of the United States.

Sculpting the Presidents at Mount Rushmore

During a second visit to the Black Hills in August 1925, Borglum identified Mount Rushmore as the desired site of the sculpture. Local Native Americans and environmentalists voiced their opposition to the project, deeming it a desecration of Sioux heritage as well as the natural landscape. But Robinson worked tirelessly to raise funding for the sculpture, aided by Rapid City Mayor John Boland and Senator Peter Norbeck, among others. After President Calvin Coolidge traveled to the Black Hills for his summer vacation, the sculptor convinced the president to deliver an official dedication speech at Mount Rushmore on August 10, 1927; carving began that October.

In 1929, during the last days of his presidency, Coolidge signed legislation appropriating $250,000 in federal funds for the Rushmore project and creating the Mount Rushmore National Memorial Commission to oversee its completion. Boland was made the president of the commission’s executive committee, though Robinson (to his immense disappointment) was excluded.

To carve the four presidential heads into the face of Mount Rushmore, Borglum utilized new methods involving dynamite and pneumatic hammers to blast through a large amount of rock quickly, in addition to the more traditional tools of drills and chisels. Some 400 workers removed around 450,000 tons of rock from Mount Rushmore, which still remains in a heap near the base of the mountain. Though it was arduous and dangerous work, no lives were lost during the completion of the carved heads.

Mount Rushmore Depictions

On July 4, 1930, a dedication ceremony was held for the head of Washington. After workers found the stone in the original site to be too weak, they moved Jefferson’s head from the right of Washington’s to the left; the head was dedicated in August 1936, in a ceremony attended by President Franklin D. Roosevelt.

In September 1937, Lincoln’s head was dedicated, while the fourth and final head–that of FDR’s fifth cousin, Theodore Roosevelt–was dedicated in July 1939. Gutzon Borglum died in March 1941, and it was left to his son Lincoln to complete the final details of Mount Rushmore in time for its dedication ceremony on October 31 of that year.

Mount Rushmore National Memorial, sometimes called the “Shrine of Democracy,” has become one of the most iconic images of America and an international tourist attraction. In 1959, it gained even more attention as the site of a climactic chase scene in Alfred Hitchcock’s film “North by Northwest.” (In fact, South Dakota did not allow filming on Mount Rushmore itself, and Hitchcock had a large-scale model of the mountain built in a Hollywood studio.)

In 1991, Mount Rushmore celebrated its 50th anniversary after undergoing a $40 million restoration project. The National Park Service, which maintains Mount Rushmore, records upwards of 2 million visitors every year. Meanwhile, many Sioux activists have called for the monument to be taken down, even as they continue to protest what they view as illegal U.S. possession of their ancestral lands.

Crazy Horse Memorial

Another sculpture was also carved in the Black Hills – that of Crazy Horse.

A Lakota Sioux warrior, a famed artist, his family and a canvas composed of granite are the elements that comprise the legendary past, present and future of the Crazy Horse Memorial.

Sculptor Korczak Ziolkowski began the world’s largest mountain carving in 1948. Members of his family and their supporters are continuing his artistic intent to create a massive statue that will be 641 feet long and 563 feet high. To give that some perspective, the heads at Mount Rushmore National Memorial are each 60 feet high. Workers completed the carved 87½-foot-tall Crazy Horse face in 1998, and have since focused on thinning the remaining mountain to form the 219-foot-high horse’s head.

Crazy Horse Memorial hosts between 1 and 1½ million visitors a year. The number of foreign travelers, particularly group tours from Asia, is increasing.

The Indian Museum of North America, and the adjoining Welcome Center and Native American Educational and Cultural Center, feature more than 12,000 contemporary and historic items, from pre-Columbian to contemporary times. The new Mountain Museum wing helps explain the work behind the scenes, augmenting the introductory “Dynamite & Dreams” movie at the Welcome Center.

Crazy Horse Memorial is open every day, from 8 a.m. to dark during the summer season. Memorial Day weekend through the end of September, the storytelling continues each night at dark with the “Legends in Light” laser-light show projected on the mountain carving.

The Northern Lights

The northern lights, or the aurora borealis, are beautiful dancing waves of light that have captivated people for millennia. But for all its beauty, this spectacular light show is a rather violent event.

Energized particles from the sun slam into Earth’s upper atmosphere at speeds of up to 45 million mph (72 million kph), but our planet’s magnetic field protects us from the onslaught.

As Earth’s magnetic field redirects the particles toward the poles — there are southern lights, too — the dramatic process transforms into a cinematic atmospheric phenomenon that dazzles and fascinates scientists and skywatchers alike.

Though it was Italian astronomer Galileo Galilei who coined the name “aurora borealis” in 1619 — after the Roman goddess of dawn, Aurora, and the Greek god of the north wind, Boreas — the earliest suspected record of the northern lights is a 30,000-year-old cave painting in France.

Since that time, civilizations around the world have marveled at the celestial phenomenon, ascribing all sorts of origin myths to the dancing lights. One North American Inuit legend suggests that the northern lights are spirits playing ball with a walrus head, while the Vikings thought the phenomenon was light reflecting off the armor of the Valkyrie, the supernatural maidens who brought warriors into the afterlife.

The oldest known account of an auroral sighting was written in 2600 B.C. in China: “Fu-Pao, the mother of the Yellow Empire Shuan-Yuan, saw strong lightning moving around the star Su, which belongs to the constellation of Bei-Dou, and the light illuminated the whole area.” Thousands of years later, in 1570 A.D., a drawing of the aurora depicted candles burning above the clouds.

Across the north, where the phenomenon is most widely seen, there are legends and beliefs about the Northern Lights that defy reason but go back generations. In Canada’s Northwest Territories, a tribal elder explained that, as a youngster, he heard stories from his grandfather that, if you listen closely, you can hear the Northern Lights. They stepped outside and heard a swishing, almost crackling sound.

Stories were told that one could “whistle in” the Northern Lights, and some said you could inhale the Northern Lights and they would kill you. Still others claimed they were the spirits of children who were stillborn. The crackling sound has been claimed to be the spirits trying to communicate with you; alternatively, other legends say it is the Inuit playing a game of kicking a walrus skull around, and the crackling sound is the crunching of the snow.

Inuit Tribe Members

Sharon Shorty, a Yukon storyteller and comedian of Tlingit, Northern Tutchone and Norwegian background, remembers walking around Teslin, Yukon as a child with her grandmother, Carrie Jackson.

Sharon Shorty

“I could see all the ribbons in the air and Grandma would tell me, ‘Shhhh! Don’t look, don’t look! Bad luck, no good!’ I asked, ‘Why can’t I look?’ and she said it’s bad luck, that they are spirits. So when we’re looking at them, they are spirits – people who have passed on in a bad or hard way. That could mean a suicide or a murder or something in a bad way. This is what Tlingit people believe, and I think other nations believe that as well.”

Over the Takhini Valley in Yukon

“To me, it looks like people holding hands and it is our ancestors. They died in a bad way, are lonely, and want company. They want to take somebody from earth to be with them and they could come down and take you if you look at them or get their attention. That’s why we say never whistle at them – you’re not supposed to draw their attention because they will find you.”

Early astronomers also mentioned the northern lights in their records. A royal astronomer under Babylon’s King Nebuchadnezzar II inscribed his report of the phenomenon on a tablet dated to 567 B.C., for example, while a Chinese report from 193 B.C. also notes the aurora, according to NASA.

The science behind the northern lights wasn’t theorized until the turn of the 20th century. Norwegian scientist Kristian Birkeland proposed that electrons emitted from sunspots produced the atmospheric lights after being guided toward the poles by Earth’s magnetic field. The theory would eventually prove correct, but not until long after Birkeland’s 1917 death.

A lime-green aurora glows above Earth’s city lights in this view from the International Space Station. At the time this photo was taken, the space station was orbiting about 258 miles (415 kilometers) above Russia and Ukraine. A portion of the space station’s solar array is visible in the top left corner of the image. (Image credit: NASA)

The bright colors of the northern lights are dictated by the chemical composition of Earth’s atmosphere.

“Every type of atom or molecule, whether it’s atomic hydrogen or a molecule like carbon dioxide, absorbs and radiates its own unique set of colors, which is analogous to how every human being has a unique set of fingerprints,” Billy Teets, director of Vanderbilt University’s Dyer Observatory, told Space.com. “Some of the dominant colors seen in aurorae are red, a hue produced by the nitrogen molecules, and green, which is produced by oxygen molecules.”
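To put a rough number on that “fingerprint” analogy: one widely cited example is the green emission line of oxygen near 557.7 nm, and any such wavelength corresponds to a definite photon energy,

\[
E = \frac{hc}{\lambda} = \frac{(6.626 \times 10^{-34}\ \text{J·s})(2.998 \times 10^{8}\ \text{m/s})}{557.7 \times 10^{-9}\ \text{m}} \approx 3.6 \times 10^{-19}\ \text{J} \approx 2.2\ \text{eV},
\]

which is why each color in the sky acts as a chemical signature of the gas that produced it.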

While the solar wind is constant, the sun’s emissions go through a roughly 11-year cycle of activity. Sometimes there’s a lull, but at other times there are vast storms that bombard Earth with extreme amounts of energy. This is when the northern lights are at their brightest and most frequent. The last solar maximum, or period of peak activity, occurred in 2014, according to the U.S. National Oceanic and Atmospheric Administration (NOAA), placing the next one in approximately 2025.

Despite plenty of advances in heliophysics and atmospheric science, much about the northern lights remains a mystery. For example, researchers weren’t entirely sure how the energized particles in the solar wind get accelerated to their extraordinary speeds (45 million mph) until June 2021, when a study published in the journal Nature Communications confirmed that a phenomenon called Alfvén waves gave the particles a boost. Alfvén waves are low-frequency yet powerful undulations that occur in plasma due to electromagnetic forces; the electrons that create the northern lights “surf” along these waves in Earth’s atmosphere, accelerating rapidly.
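For a sense of what an Alfvén wave is, its characteristic propagation speed through a plasma takes the standard textbook form

\[
v_A = \frac{B}{\sqrt{\mu_0 \rho}},
\]

where B is the magnetic field strength, μ0 is the permeability of free space, and ρ is the plasma mass density. Stronger fields and more tenuous plasma mean faster waves, so electrons that “surf” these waves can be carried to correspondingly high speeds.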

The auroras are best seen during the winter, when nights are long. Hours of patience by photographer Daniele Boffelli resulted in this image that captures both clouds and auroras in the night sky. (Image credit: Daniele Boffelli)

NASA is also on the hunt for clues about how the northern lights work. In 2018, the space agency launched the Parker Solar Probe, which is currently orbiting the sun and will eventually get close enough to “touch” the corona. While there, the spacecraft will collect information that could reveal more about the northern lights.

On Earth, the northern lights’ counterpart in the Southern Hemisphere is the southern lights — they are physically the same and differ only in their location. As such, scientists expect them to occur simultaneously during a solar storm, but sometimes the onset of one lags behind the other.

The Southern Lights over Australia (Aurora Australis)

“One of the more challenging aspects of nightside aurorae involves the comparison of the aurora borealis with the aurora australis,” said Steven Petrinec, a physicist at the aerospace company Lockheed Martin who specializes in magnetospheric and heliospheric physics.

“While some auroral emissions occur in both hemispheres at the same magnetic local time, other emissions appear in opposing sectors in the two hemispheres at different times — for example, pre-midnight in the Northern Hemisphere and post-midnight in the Southern Hemisphere,” Petrinec told Space.com.

The hemispheric asymmetry of the aurora is due in part to the sun’s magnetic field interfering with Earth’s magnetic field, but research into the phenomenon is ongoing.

Another aurora-like occurrence on Earth is STEVE (“Strong Thermal Emission Velocity Enhancement”). Like the northern and southern lights, STEVE is a glowing atmospheric phenomenon, but it looks slightly different from its undulating auroral counterparts. “These emissions appear as a narrow and distinct arc, are typically purple in color and often include a green picket-fence structure that slowly moves westward,” Petrinec said.

STEVE is also visible from lower latitudes, closer to the equator, than the auroras.

A 2019 study published in the journal Geophysical Research Letters discovered that STEVE is the result of two mechanisms: The mauve streaks are caused by the heating of charged particles in the upper atmosphere, while the picket-fence structure results from electrons falling into the atmosphere. The latter process is the same driver of the aurora, making STEVE a special kind of aurora hybrid.

More info: https://www.space.com/15139-northern-lights-auroras-earth-facts-sdcmp.html

History of Sugar Beet Production in Nebraska

Nebraska’s Panhandle, in the far western portion of the state, plays a major role in the state’s agricultural economy. Among the specialty crops with the greatest significance to that economy are sugar beets, a crop unique to western Nebraska and concentrated in the Panhandle: approximately 90% of the sugar beets grown in Nebraska are produced there. Most production occurs in Scotts Bluff, Morrill, and Box Butte counties, but acreage is increasing in Sheridan, Banner, Kimball, Cheyenne, Chase, Keith, and Perkins counties.

Harvest Begins in the Panhandle

Nebraska currently ranks 6th in the U.S. in production, with between 45,000 and 60,000 acres generally planted per year and a high of 80,000 acres in 2000. Sugar beets contribute economically through both the production and processing industries and are estimated to contribute more than $130 million to the local economy through payrolls, property taxes, and other impacts.

Geography and Climate

Sugar beets have been successfully produced in Nebraska for nearly 100 years, due in part to a number of environmental factors characteristic of the western part of the state. Sugar beets need a long growing season (approximately 140 days) with plenty of sunshine and abundant moisture.

This region typically averages 135-160 clear days per year, which is ideal for sugar beets. The elevation ranges from 3,000 to 5,000 ft, and the resulting hot days and cool nights provide excellent conditions for the development and storage of sucrose in the tap roots. The Panhandle also has a semi-arid, high-desert climate receiving 14-16 inches of rainfall per year. This aridity provides an additional advantage for western Nebraska: it helps reduce the incidence and severity of several important foliar diseases, such as Cercospora leaf spot, that traditionally plague producers in Minnesota, North Dakota, and Michigan. Although natural moisture in Nebraska is generally insufficient for proper plant growth, the deficit is made up with irrigation.

Irrigation

The Panhandle grows about 700,000 acres of irrigated crops, including all sugar beets produced in Nebraska. Irrigation in Nebraska began in the early 1890s in Scotts Bluff County to augment alfalfa hay production for winter livestock feeding. This led to the development of irrigation for other introduced crops like sugar beets and, later, dry beans. Early efforts were small and financed by local capital. After the Reclamation Service was established, dams were built across the North Platte River in Wyoming, paving the way for a complex series of canals across Scotts Bluff County in western Nebraska.

Furrow System Irrigation

These canals became the lifeblood of this region and enabled producers to irrigate beets through furrow systems, which still predominate in the North Platte Valley of Scotts Bluff and Morrill Counties. With the introduction of center pivot irrigation systems and the vast quantities of water available from the Ogallala aquifer, acreage has been able to spread beyond the Valley to the tablelands north and south of the North Platte River. Approximately two-thirds of production is now irrigated by center pivots.

Center Pivot Irrigation

Sugar Beets Begin in Nebraska

Sugar production in Nebraska began in 1890, with the first factory established in Grand Island. It was also the second factory to operate successfully in the United States (after the first in Alvarado, CA). Weather-related problems led to poor yields, and farmers became discouraged with continuing sugar beet production. The state of Nebraska then offered a bounty of one cent per pound on sugar produced in Nebraska to encourage the industry to expand. Other factories were then built in Norfolk and Ames in 1891 and 1899, respectively.

The farmers in the Norfolk area eventually discovered that they received better returns raising corn and livestock than sugar beets, and the factory was closed in 1905. The factory in Ames was built by the Standard Beet Sugar Company, which soon recognized the potential for the sugar beet industry in the North Platte Valley.

By 1900, enough sugar beets had been raised in the Panhandle to convince farmers that their land was suitable for this crop. It also became clear that, if the crop was to expand significantly, a more extensive means of irrigation was needed than that used by early homesteaders. Thus, the Tri-State Land Company was founded and set out to develop irrigation throughout the Valley.

In 1909, land was acquired and a factory site was secured; Great Western Sugar Company bought the factory previously located in Ames, Nebraska and moved it to Scottsbluff. For the 1910 season, twelve thousand acres were contracted at $5.00 a ton, and the factory was completed in time for the fall crop, laying the foundation for sugar beets to become the great agricultural industry they are today.

Delivering Sugar Beets in Scottsbluff

Social and Economic Influences

By 1904-1905, contract acreage for sugar beets with the Standard Beet Sugar Company approached 300 acres. However, many of the farmers in Scotts Bluff County were unfamiliar with the process of sugar beet production. In order to fulfill the area’s high potential, efforts were made to recruit help from more experienced growers.

It was at this time that the German-Russians (referred to as “beeters”) from Lincoln and Omaha were enticed to come to the North Platte Valley. These workers first came to the Panhandle seasonally, arriving in spring for planting and returning home in the fall after harvest. Their experience with sugar beets dated back to their time in Europe and had also been essential in the initial efforts to produce beets in eastern Nebraska.

By 1924, two-thirds of the sugar beet workers in Scotts Bluff County were German-Russians. As they became more Americanized, they began to find other jobs that would sustain them for the entire year, and their numbers decreased in sugar beet fields. Many also settled in the Scottsbluff area and became landowners after thriftily saving their money to buy land. Today much of the land in Scotts Bluff County is owned by second and third generation descendants of the original German-Russian “beeters.”

In 1905 the acreage planted was less than 300, with yields of seven tons per acre. By the late 1920s and early 1930s, farmers in the Valley were growing up to 80,000 acres with yields of 12 tons per acre. As the volume of sugar beet production increased over the years, it became apparent that greater processing capacity was needed. This led to the construction of more factories, including those at Gering (1916), Bayard (1917), Mitchell (1920), Minatare (1926), and Lyman (1927). Beet dumps were also established near the railroads in McGrew and Melbeta for easy transportation to Scottsbluff or Minatare for processing, similar to the process in place today. In fact, the name chosen for the town “Melbeta” translates as “sweet beet” in German.

Summary

Irrigation was the major factor in the establishment of the sugar beet industry in western Nebraska. The railroads were likewise very influential: they provided transportation to move the beets from field to factory and then to market, and they allowed coal to be brought into the area to fuel the factories. In turn, sugar beets were directly responsible for the immigration and settlement of the ancestors of many of the county’s current residents. Thus, sugar beets shaped not only the ethnic makeup of western Nebraska but also the economic development and improvement of the area.

Sugar Beet Field

The cultivation of sugar beets enabled the development of roads, expansion of railroads, improvement of schools, and growth and spatial arrangement of cities and farms in Scotts Bluff County. The number of farms increased from 421 in 1900 to 1391 in 1920. The number of people per square mile in the county increased from 2.6 in 1880 to 28.8 in 1920. The increase in population between 1910 and 1920 was more than 147%, one of the highest percentages of population gains in the United States.

Digging Sugar Beets

Therefore, the development of the sugar beet industry is arguably the single most important and influential factor in the county, from the building of the sugar factory in 1910 to the present, and it has defined Scotts Bluff County as it is known today.

THE ELECTORAL COLLEGE (EC)

(Compiled from the writings of Dr. Bob Livingston)

There is much interest on the left in eliminating the EC; indeed, three attempts have been made through the years, with the first serious challenge coming in 1969-70 from Birch Bayh, a young Democrat senator from Indiana who was also responsible for the introduction of the 25th Amendment after the assassination of JFK. He failed to overturn the EC, thankfully. Of course, the leftists claimed this was due to racism.

Bayh reintroduced an Electoral College amendment five more times over the next decade. The only time it came up for a full Senate vote was in 1979, when it got 51 votes, well shy of the 67-vote supermajority needed to pass a constitutional amendment.

https://www.history.com/news/electoral-college-nearly-abolished-thurmond

There is also currently a pact among fifteen states and DC, called the “National Popular Vote Interstate Compact” (NPVIC), that bears watching.

“The National Popular Vote Interstate Compact (NPVIC) is an agreement among a group of U.S. states and the District of Columbia to award all their electoral votes to whichever presidential candidate wins the overall popular vote in the 50 states and the District of Columbia. The compact is designed to ensure that the candidate who receives the most votes nationwide is elected president, and it would come into effect only when it would guarantee that outcome. As of November 2020, it has been adopted by fifteen states and the District of Columbia. These states have 196 electoral votes, which is 36% of the Electoral College and 73% of the 270 votes needed to give the compact legal force.”

https://en.wikipedia.org/wiki/National_Popular_Vote_Interstate_Compact
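The percentages in that passage are easy to verify against the fixed size of the Electoral College (538 votes in total, 270 needed to win):

\[
\frac{196}{538} \approx 36\%, \qquad \frac{196}{270} \approx 73\%.
\]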

The Founding Fathers feared two things above all else: a democracy and an overly powerful executive. The Electoral College was designed to prevent both. For a good, very detailed history of the Electoral College and its evolution over the years, read “Origins of the Electoral College,” by Randall G. Holcombe.

Link: https://mises.org/library/origins-electoral-college

Without the Electoral College, a presidential candidate would merely have to win a big majority of the population in five or six of the most populous states to carry the election – say, California, New York, Texas, Ohio, Pennsylvania, and Florida. As it is, Democrats hold a decisive advantage in the Electoral College math because they advocate for socialism. Minorities love socialism, and minorities tend to congregate in urban areas. As a result, Democrats can win only about 20-30 percent of America’s counties and still win the election, even with the Electoral College.

Look at the map below. It shows the counties won by Hillary Clinton (blue) and Donald Trump (red). Had Clinton been elected, a vast swath of Middle America, geographically speaking, would have been on the losing end of the election.

The globalist elites want to erase all borders in their move toward one-world government, and the globalist elites love democracy as much as minorities do. That’s because the principle of such government is that political power is maximized by forcibly leveling every individual to the same status of conformity, collectivism, ecumenicalism and serfdom.

But it must be done in such a way that the people who are being reduced never see it coming. That’s why it’s done with gradualism and by “the vote.” If we aren’t careful, we will vote for our own slavery.

References:

a. https://personalliberty.com/need-electoral-college/
b. https://personalliberty.com/anti-trump-crowd-clamors-slavery/