I become a figure of dread, briefly

Last night I was accidentally part of a strange little scene.

Before going to bed I went down to this hotel’s business center to print out a few pages for tomorrow’s workshop.  I fired up the computer, got online, grabbed the document I needed, and set to work.

You have to know that the “business center” is actually a nook near the hotel lobby.  It consists of one computer and one printer on a small table set behind a staircase, adjacent to a big, wall-mounted tv screen.  The area is not very well lit.  It is hard to see from the front desk, just twenty feet away.

So I was fiddling with Chrome, Word, and font sizes.  After a minute a hotel staffer came around the staircase corner and saw me. Immediately she gasped, clutched her chest, and actually tilted over a little, leaning on the corner for support.

Concerned, I immediately apologized and asked if she was ok. Her reply came back in a breathless near-stutter: “I thought you were … the shadow driver!”

“What?”

“The shadow driver!”

Well, it turned out that I misheard (I was very sleepy) and she wasn’t speaking clearly (she was startled). It emerged that what she meant to say was “the shuttle driver.” That person was supposed to be somewhere else at the moment, or not there at all.

But I’d rather be the shadow driver any day.

Posted in personal | 9 Comments

New demographic research for educators and futurists

How populations change over time is a crucial area of concern for both educators and futurists.  This week some important new research appeared from The Lancet in their new Global Burden of Disease Study.  Teams dove into demographic and health changes over the past 70 years, assembling truly impressive amounts of transnational data.

I’ll summarize some of the key points here, and would love to hear your thoughts.

Human life expectancy has really soared in just a few generations:

Globally, life expectancy at birth has increased from 48·1 years… in 1950 to 70·5 years (70·1–70·8) in 2017 for men and from 52·9 years…in 1950 to 75·6 years… in 2017 for women.

world life expectancy_Lancet -females

Life expectancy for women worldwide.

Some of that growth is quite recent, taking place over the past 30 years. “Globally, from 1990 to 2017, life expectancy at birth increased by 7·4 years… from 65·6 years… in 1990 to 73·0 years… in 2017.”

A key driver of this growth is the “remarkable… decline in under-5 mortality since 1950… at the global level.”

Yet there are some local variations, such as America’s “deaths from despair” and “the obesity epidemic, the opioid crisis, or the rise of drug-related violence in some locations.” Researchers offer this useful and sobering caution:

Because of the celebrated progress in many locations, many people have come to expect age-specific death rates to always decline; however, there is nothing inevitable about the trajectory of death rates, particularly in adults.

One side effect of this terrific rise in life expectancy is rising levels of disability.

If you want to think about what’s most likely to kill you, it isn’t violence or infectious disease.  No, the world’s leading killers are, by far, “non-communicable diseases [which] contribut[ed] to 73·4%… of total deaths in 2017.”  Similarly, what’s most likely to take a toll on your lifespan? “[N]eonatal disorders, ischaemic heart disease, stroke, lower respiratory infections, and chronic obstructive pulmonary disease.” (Media note: let’s see how that plays on violence and fear-obsessed American tv “news”, hm?)

Human population growth is slowing down, as “global population growth rates have declined from a peak of 2·0% in 1964 to 1·1% in 2017.” In many nations populations have plateaued. In several, population is starting to shrink.  Most of the nations experiencing active growth are in Africa:

world population growth _Lancet

Overall, we’re still producing net more humans, just at a slower rate.

One major driver for this is a massive drop in how many children women have. “From 1950 to 2017, TFRs [total fertility rates] decreased by 49·4%.”

Again, as I and others keep saying:

In high-income countries, the proportion of the population that is of working age has also decreased in the past 5 years, and this trend is likely to continue for the foreseeable future. This demographic shift toward an older population has a broad range of consequences, from reductions in economic growth, decreasing tax revenue, greater use of social security with fewer contributors, and increasing health-care and other demands prompted by an ageing population.

Meanwhile, the reports offer a glimpse of rising health care issues:

The risks with the highest increases in SEVs [summary exposure value]* globally include high BMI, ambient particulate matter pollution, and high FPG; the risks with the largest decreases in exposure are unsafe sanitation, diet high in trans fatty acids, and household air pollution.

Watch for more public health movements along those lines.

One historical note: thanks to the timeline of these studies (1950 on), the biggest single disaster in that period is one that receives little attention.  Again and again researchers in these articles single out China’s Great Leap Forward as a massive catastrophe, “[t]he most notable fatal discontinuity in the past 68 years.”  For example,

The huge impact of the Great Leap Forward in China in 1960 is shown clearly at both the global and regional level. Globally, life expectancy dropped by 5·1 years (3·9–6·2) as a result of the famine.

Equally astonishing is how China bounced back:

Despite the massive setback around the famine in 1960, China has made steady progress and, in 2017, life expectancy was 74·5 years (95% UI 74·1 to 75·0) for men and 79·9 years (79·4 to 80·4) for women.

Remember, as Branko Milanovic observes, people outside of China often don’t pay enough attention to that great nation.

Where does all of this data take us?  The giant pile of research also has a forecasting section.  One key futuring conclusion is that lifespans may well keep increasing:

We forecasted global life expectancy to increase by 4·4 years (95% UI 2·2 to 6·4) for men and 4·4 years (2·1 to 6·4) for women by 2040, but based on better and worse health scenarios, trajectories could range from a gain of 7·8 years (5·9 to 9·8) to a non-significant loss of 0·4 years (–2·8 to 2·2) for men, and an increase of 7·2 years (5·3 to 9·1) to essentially no change (0·1 years [–2·7 to 2·5]) for women.

What should educators take away from this?  Let me offer some points.

  • Once again, most of our populations are aging. This has plenty of ramifications.
  • Higher education institutions seeking to boost traditional-age enrollment would do well to recruit from Africa, the Middle East, Pakistan, and parts of Southeast Asia.  Perhaps this is a good time to partner up with local secondary education as well.  This may well take significant resources.
  • Campuses will have to expand disability accommodations, both in person and online.  This will be even more needed as we teach older folks.
  • For people who are still worried about humanity growing too large, are you really talking about Africa and parts of Asia?  There might be some ugly politics there.
  • Changing health issues should help feed increasing demand for academic medical studies and career preparation.  (Another datapoint for the Health Care Nation scenario)
  • Stresses around population changes, both individual and social, may well spur rising interest in and acceptance of suicide, as I’ve noted previously.
  • Please, please, please teach media criticism so that students don’t get distracted by bad reporting of actual, real-world disease, violence, and death.

*SEVs are “a summary measure of exposure… The metric is a risk-weighted prevalence of an exposure, and it offers an easily comparable single-number summary of exposure to each risk. SEVs range from 0% to 100%, where 0% reflects no risk exposure in a population and 100% indicates that an entire population is exposed to the maximum possible level for that risk.”

Posted in demographics | 3 Comments

When the guns stopped, but the future kept happening

At 11 am on the 11th day of 1918’s 11th month, all of the guns fell silent.

They had been firing for more than four years without pause. From August 1914 Western Europe had been mutilated and shattered by continuous industrial warfare. Millions had died in what seemed like a science fiction war, with new and horrifying weapons (poison gas, submarines, flamethrowers, tanks, aircraft) ravaging lands that had just recently congratulated themselves on being the acme of human civilization. A generation was gutted in mud and futility.

And then it all stopped, at a single minute, from the North Sea to Switzerland.

A desperate German embassy, crossing over the war’s hellscape, had met with victorious French and British (but not American nor Italian) leaders in a railway car parked in the woods around Compiègne.   The German representatives, low-ranking enough to be insulting by their very selection, had sought a cease-fire.  But faced with fierce Allied demands, escalating military defeat, previously unthinkable mutinies, and then the sudden implosion of their Reich, they were forced to agree to much more.  (There had been a false peace alarm four days earlier: a touch of fake news.)

For the tens of millions of soldiers fighting in and east of terrifying trenches, the onset of peace was something miraculous.

At that moment, the Times correspondent Edwin L. James wrote from the front, “four years’ killing and massacre stopped, as if God had swept His omnipotent finger across the scene of world carnage and cried, ‘Enough!’”

US troops cheer WWI end

American troops celebrate

Thomas Hardy responded in verse:

Breathless they paused. Out there men raised their glance
To where had stood those poplars lank and lopped,
As they had raised it through the four years’ dance
Of Death in the now familiar flats of France;
And murmured, ‘Strange, this! How? All firing stopped?’
Aye; all was hushed.

Miraculous could also mean unsettling, weird: “a thick white mist over the whole district, which hid everything over a distance of twenty yards from you”:

In one way this dense white shroud, though not in keeping with the joyfulness of the occasion, agreed with what was by far the most striking feature about the cessation of hostilities—uncanny silence. After what I have known of the front for the last four years or more, it seems incredible to be standing here with all the paraphernalia of war lying about, and the air to be absolutely still, and the silence unbroken by a single shot.

WWI end

You can even listen to it.

It’s an astonishing moment.  It fills our imagination.  I heard Kurt Vonnegut speak to it – twice.  The symbolism amazes.

And yet.  Such periods are rarely what they seem.  They simplify.  They attract our attention, drawing it away from other stories.  Clean breaks are a known problem for historiography; they should also be a caution for futurists.

How can I say this?  Because at 11 am on 11/11/18 the Great War’s bloodshed and the forces it unleashed didn’t end.  The violence and political turmoil surged on, even on November 11th itself.

For example, the East African front saw fighting for two more weeks, as von Lettow-Vorbeck continued his guerrilla campaign against the British empire.

In the former Russian empire civil war raged in the wake of the Bolshevik revolution. Fighting would include not only battles between Reds and anti-Bolshevik Whites, but an anarchist uprising, a revolt in Petrograd itself, and a Soviet invasion of Poland aimed at Berlin.  The victorious Allies actually invaded Soviet Russia, preferring in the end to call the doomed failure an intervention instead. In fact, on November 11th itself, British, Canadian, and American troops on Russian soil fought a small battle with Bolsheviks.

Elsewhere in eastern Europe Latvia would successfully fight for its independence, a war which ended in 1920. Hungary would experience a short-lived Communist government in 1919, as would part of the new state of Czechoslovakia.  Finland, an independent nation for the very first time, still bled from its horrible 1918 civil war.

Post-Ottoman Turkey would revolt against an Allied-imposed treaty (“signed on 10 August 1920, in an exhibition room at the Manufacture nationale de Sèvres porcelain factory”) and go on to successfully fight Armenia, France, Britain, and especially Greece through 1922, culminating in the forced resettlement of 1.6 million people and the creation of today’s Turkish Republic.

Elsewhere in former Ottoman lands now “administered” by the Allies, Egyptians would revolt against British occupation in 1919, as would Iraqis in 1920.

After signing the November 11th armistice, German civilians would continue to suffer the Allied naval blockade for another half year. Germany’s new republic would be wracked by civil disturbances for years. Fighting broke out around Berlin in December. A Spartacist uprising occurred in January 1919, along with a Bremen Soviet. Bavaria formed a Soviet Republic in April 1919. Freebooting militias, the Freikorps, would fight left-wing movements and assassinate people, like Rosa Luxemburg and Karl Liebknecht in January 1919. Ruhr workers revolted in 1920. Poles and Silesians would rebel against Weimar for three years.

Spain would go through civil unrest and revolts (“Three Bolshevik Years”) culminating in a military dictatorship in 1923.

Italy would experience escalating unrest from left and right. In 1919-1920 Gabriele d’Annunzio led WWI veterans in seizing the city of Fiume, which had just been assigned to the new Yugoslav state. Italy would experience the first anti-fascist local revolt in 1921, then the imposition of Europe’s first fascist national state in 1922.

Ireland, in the wake of the war’s Easter Rising, would wage a war of independence from Britain (1919-1921), then fight a bitter civil war in 1922-23. During Cogadh na Saoirse Limerick created its own Soviet in 1919.

The United States, a power that arrived very late to the war, was already undergoing its first red scare in November 1918.  Race riots in 1919 – i.e., white people attacking blacks, including veterans – would build into the Red Summer.  The Battle of Blair Mountain between capital and insurgent labor would occur in 1921.

Meanwhile, “the Spanish flu” – also known as the Great Influenza – continued to kill millions more.  It ultimately took 50 to 100 million lives, on top of World War I’s butcher’s bill of 15-19 million.

So at the very least we can track these WWI continuations and echoes for five more horrendous years.  Indeed, Robert Gerwarth argues that we should date the war’s end as 1923.

Why, then, are our eyes drawn to 11/11/18?  I think a major reason is the long-standing* divide between eastern and western Europe.  People in the latter usually avoid paying attention to the former.  This is especially true in the United States, where “Europe” usually means “Britain and France and maybe Germany.”  The former Ottoman lands are even more neglected.  The armistice we celebrate today was only for one of WWI’s fronts.

A great deal of the violence follows from Allied mistakes and disasters, from the stupid invasion of Soviet Russia to the failed attempts to manage Turkey.  The armistice lets us avoid all of that unpleasantness in favor of celebrating a clear victory instead.

Moreover, much of the chaos stems from left-wing revolution, both from Soviet-inspired movements and their rising right-wing opposition, which rapidly develops into fascism.  Seeing 11/11 as a clean break neatly sidesteps the left-wing political challenge, while avoiding acknowledging the right’s rise even at the moment of Allied victory.

Finally, the settlement of WWI famously (or notoriously) led straight to the even greater disaster of WWII.  Focusing on November 11th, 1918, lets our attention pause without looking ahead.

Please don’t interpret this post as an argument for not recognizing the armistice.  Quite the opposite.  As many of my readers and friends know, I’m keen to increase our awareness of World War I.  I also recognize the date as a way to commemorate the unspeakable suffering that war entailed.

But I want us to think more carefully about that date.  We need to be careful about such clear punctuations and understand their limitations.  Especially when we look ahead in forecasting, we have to keep our eyes open for all stories, not just the ones with such astonishing endings.

*I date the split to the 11th century.

(thanks to lharmon for the audio link)

Posted in politics, technology | 9 Comments

Our Twitter and Teargas book club reading schedule

Our online book club is reading Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest.  Here’s the plan.

I’ll blog about sections of the book starting November 19th, every Monday for the next five weeks.  These posts will have a short summary of the reading, my observations, and some questions.  The posts will also have summaries of and links to other commentary as we go.

Based on my reading of the material, I picked chunks of the book that run between around 40 and 60 pages.  That’s doable for a week’s read, while leaving time for folks to reflect and comment.

The schedule runs as follows:

November 19, 2018: Preface, Introduction, and chapter 1, “A Networked Public”.

November 26: Chapters 2: “Censorship and Attention” and 3: “Leading the Leaderless”.

December 3: Chapters 4: “Movement Cultures” and 5: “Technology and People”.

December 10: Chapters 6: “Platforms and Algorithms” and 7: “Names and Connections”.

December 17: Chapters 8: “Signaling Power and Signaling to Power” and 9: “Governments Strike Back”.

All blog posts and their associated comments will be tagged https://bryanalexander.org/tag/tufekci/, and so will be available in that one spot for any reader now and in the future.

How do you participate?  Pretty much however you like:

  • reading on your own and following these posts
  • adding comments to each blog post
  • sharing thoughts on Twitter (please tweet @ me to get your tweets into the next blog post)
  • writing more thoughts on your own site, or on Facebook, LinkedIn, Google+, Instagram, or wherever you like.  (Let me know if you’d like me to link to and/or summarize your reactions on my next post)
  • annotating relevant web content with Hypothes.is (again, let me know)
  • creating audio or video commentary.  Readers who make podcasts or videos can certainly tune their media to our discursive channel.  One Future Trends Forum guest took time in our conversation to reflect on the then-current reading.
  • making stuff.  In previous readings people have created a variety of responses, like an inspired dialog, a social network analysis of readers and reading, even a quote-generating Twitter bot.
  • …and any other way you’d like.  Let us know what you decide!  This is an open and welcoming reading.

Also, I will reach out to professor Tufekci to invite her to join us however she can: comments here, Twitter notes, or video.

There is a good amount of information about the book online.  It has its own Wikipedia page.  You can also find it on Goodreads and its official site.  Grab a library copy, get your own, and dig in.

Happy reading!

 

Posted in book club | Tagged | Leave a comment

One university to close, another lays off faculty, and two more will merge

How is American higher education faring in an era of sustained challenges and possible decline? This morning Inside Higher Ed offered three (3) stories about campuses taking drastic steps.

First, Iowa Wesleyan University’s president floated the prospect of closing that institution.  According to Steven Titus, IWU is simply running out of money.

The university does not have a healthy endowment or extensive donor network. We have attempted to secure funding to establish a solid financial base. Unfortunately, several anticipated gifts simply have not materialized. At this moment, the university does not have the required financial underpinnings to bridge the gap between strong enrollment and new programming, and the money needed to keep the institution open. [emphases added]

The endowment is too small to help, and development seems to have failed.  In addition, Titus cites this context: “small liberal arts colleges and universities across the country continue to face significant financial challenges.”

Note, too, the emphasis not on a national or international student body and institutional reach, but on a very local setting:

We have struggled, yet survived, for decades because of our strong commitment to our students and the southeast Iowa region…

These decisions may have a profound impact on students, faculty, staff members as well as the entire southeastern Iowa community. Iowa Wesleyan’s economic impact to the southeastern Iowa region is over $55 million annually.

Unusually, student numbers don’t seem to be a problem, according to IWU’s president: “[o]ur enrollment has doubled, student retention has increased…”  Without knowing much about their financials, I would hazard a guess that their student numbers have increased, but at the price of lots of aid and a rising discount rate.  In other words, greater enrollment but not increased revenue.

Second, two Oregon art colleges are going to merge. The Oregon College of Art and Craft and Pacific Northwest College of Art will become one new institution.  OPB cites “a national trend toward lower enrollment and rising costs.”  For example,

OCAC Interim President Jiseon Lee Isbara indicated financial factors led her college to the negotiation table.

“By any measure, OCAC is in a place that needs to explore proactive solutions for a sustainable future,” Isbara said. “The current higher education environment has proven to be precarious. We believe the merger will strengthen the merged colleges’ future.”

The article also mentions national context: “the pain that’s hit higher education in recent years.”

Will any faculty or staff cuts occur?  That seems likely, if realizing economic efficiencies is in order.

Third, another queen sacrifice: Savannah State University, a historically black institution, will lay off twenty-six (26) faculty.  That’s nearly 7% of 385 full-time instructors, according to Wikipedia.

Why is this happening?  Declining enrollment as well as declining state support:

Officials announced that the university would be “realigning its resources” in light of two consecutive years of declining enrollment and state-allocated funding. The university’s enrollment saw a 10.6 percent decline in fall 2017 and a 7.9 percent decline in fall 2018.

For state support, the Atlanta Journal-Constitution reports these figures: “Savannah State’s total budget declined from nearly $121 million from the fiscal year that ended June 30 to about $107 million this fiscal year, according to state data.”

I haven’t been able to find out which departments are suffering the cuts.  At least one source claims the riffed professors are not tenured; it’s not clear if they were on or off the tenure track.  (This source also finds them to be non-tenured)

None of these campuses are elite institutions.  They won’t receive the sustained media scrutiny and academic discussion that attends every move by Harvard or Stanford.  But they represent thousands of human lives, as well as academic institutions every bit as meaningful.  We must not allow American post-secondary education’s pecking order to drive them from our consideration.

Queen sacrifice, merger, closure: these are examples of trends I’ve been tracking for years, as my faithful readers know.  Each of these stories comes from a distinct campus with its particular local contours, but the trends are nation-wide and persistent.  The forces of changing enrollment and declining state support continue to wreak havoc.  The American higher ed crisis rolls on.

Are we in the midst of a market correction, as colleges and universities adjust painfully to the new order?  Is this kind of news now the new normal?

(thanks to Mark Rush and Matthew Henry)

Posted in research topics | Tagged | 3 Comments

This week in Denver: join me at EDUCAUSE

Over the next few days I will be in Denver, Colorado, at the EDUCAUSE annual conference.  I’ll be doing plenty of things, and hope to meet some of you, human-sized or otherwise:

Bryan with Denver blue bear

We bears always find each other.

Tomorrow – Tuesday, October 30 – I’m leading a preconference workshop, Digital Storytelling, Educational Technology: The State of the Art.  It will run from 12:30pm – 4:00pm local time.  Here’s the official description:

What does digital storytelling mean for education, 25 years after its creation? This session will cover digital storytelling’s state of the art, including established practices in assessment, support, technology selection, and program development.

Mark Corbett Wilson, longtime friend of the Future Trends Forum, will co-teach the workshop, and I’m really looking forward to this collaboration.

Then this Thursday (November 1st) I’m trying an experimental session.  It involves two of us, and is called The Pragmatist and the Futurist.  I’m the futurist; the pragmatist is my splendid friend Michael Johnson, author, podcaster, and consultant.  (Michael wears a serious beard; our initial title for this session was “Between Two Beards”)  Together we’re going to explore major issues around education and technology strategy:

In this interactive conversation two experienced consultants share their observations about the emerging challenges to college and university’s sustainability in the 21st century. Their different experiences and perspective offer two distinct ways of approaching institutional sustainability. How can institutions survive and even thrive in a changing climate? How is the academic business model altering? Will institutions disappear? Will technology serve sustainability or drag it down? We also explore the changing nature of work and what that means for education. Are campuses ready to offer truly lifelong learning for adults who increasingly need to reskill? How do colleges and universities respond to the growing gig economy? How are business demands for learning changing? How do workers use technology to learn, and how can higher ed best deploy digital tools to prepare students for that experience?

This is also a Future Trends Forum live session, so we’re connecting that population interactively to the conference experience.  If you’re not in Denver, you can join us here.  You can add questions and comments remotely.

Meanwhile, I will be all over the conference floor, meeting people, taking in sessions.  Let me know if you’re there!

Denver night sky

Posted in travel | Leave a comment

Looking ahead to 2040 and beyond: some hesitation

Thinking about forecasting, here’s a draft section from my upcoming book.  Consider it some throat-clearing and/or humility:

If exploring the near- and medium-term futures of colleges and universities is daunting and demands extensive analysis, attempting to look further ahead should inspire true humility.  Starting with the largest possible scale of analysis, planetary civilization, involves modeling the possibilities of climate change, which already necessitates an enormous scientific endeavor. That research has established the likelihood of a one- or two-degree Celsius temperature rise over the next few decades, a warming which could drive larger numbers of climate refugees across international borders.  Similar movements over the past few years have already changed the face of politics in Europe; we can anticipate at least as much social and cultural stress, on top of large-scale human suffering.  This temperature rise is likely to trigger agricultural crises stemming from crop failures due to excessive warmth or encroaching aridification; that, too, could inspire political, economic, and social unrest.  Anticipating and mitigating these possibilities – should a given polity decide to do so – then stimulates yet another level of political, economic, and social change.

Looking at change drivers for geopolitical structures and events other than those caused by the Earth’s changing climate involves a small galaxy of possibilities.  We have touched on several of these throughout this book.  The growing age gap between developed and developing nations; rising and unevenly distributed income and wealth inequality; the tension between those seeking to extend and deepen globalization versus neonationalists and localists; growing illiberalism in many political environments; areas of rising religious belief and practice versus regions of growing religious unaffiliation; the battle between corruption and law enforcement; the continued struggle for women’s rights; traffic in multiple illegal substances: all of these, and more, offer ways for shaking or reaffirming certain elements of the world order.  Individual nations and regions provide myriad opportunities for change, too, from long-standing border tensions (Israel and its Arab neighbors; China and India; India and Pakistan) to political instability (sub-Saharan Africa, some of the Middle East) to transcontinental projects (China’s Belt and Road initiative). The possible alterations to the present-day political settlement ramify accordingly.

Alongside and intertwined with these forces is the ongoing technological revolution, a domain which offers yet another realm of colossal complexity.  Attempting to forecast the digital world of 2035 from 2018, a gap of nearly twenty years, runs risks along the lines of anticipating the technological environment of 2018 from that of 2001.  In that year most Americans were slow to imagine the mobile revolution, even while mobile phones swept the rest of the world.  The dot-com bubble had just burst, which chastened many formerly expansive imaginings.  Virtual reality of the 1990s had failed massively, and few saw it returning.  The web was growing rapidly, but remained largely in its noninteractive, document-centric mode; the more social, easier-to-publish environment of what would be dubbed “Web 2.0” was just beginning to surface.

If we wish to look beyond 2035, we would do well to augment our caution even further and imagine glimpsing 2018 from as far back as 1984. Visionaries of the Cold War’s last decade did manage to foresee certain features of our time.  Futurists like Alvin Toffler augured a shift from manufacturing and towards a post-industrial economy, which has largely transpired, at least in the developed world.  Others focused on the looming threat of nuclear war.  Most failed to pay attention to China’s recovery from its Cultural Revolution and turn towards modified capitalism, which would become one of the great stories of our time.  Science fiction writers of the then-emergent cyberpunk school envisioned a deeply networked future world dominated by large corporations and suffering from threats to civil liberties, a vision that proved quite prescient.  Those writers largely missed mobile devices, however, tended to overstate AI’s actual realization, and did not see the world-changing World Wide Web.  Looking back at historical futuring gives us some retrospective caution in looking forward now.

Cautiously, we can suggest some technological possibilities based on the frameworks that seem most durable now, and on extrapolating from some current initiatives. The Fourth Industrial Revolution model, for example, posits continued movement away from a classic manufacturing-based economy and towards a society reshaped by multiple forms of new technologies, most especially automation from AI to robotics.  Clearly there are many powerful forces driving such a transformation in the present, as we have discussed in chapters four and five.  There is a great deal of cultural and financial capital invested in this revolution. Technological invention continues to progress.  If we assume only incremental advances along these developmental lines, rather than chaotic disruptions, we should anticipate a transformed world, with a baseline of DIY manufacturing, artificial assistants, widespread robotics in professional and personal life, and rich multimedia production and experience.

Naturally we should also anticipate a range of cultural responses to a Fourth Industrial Revolution.  Automation alone offers multiple ways forward, assuming that set of technologies and practices succeeds on its own terms (and the possibility of a major automation crash is one we should anticipate).  For example, we have already imagined, and are presently working towards, various forms of recapitulating some degree of human identity in silicon.  One concept involves starting from the mass of expressions a given person creates during decades of life – text messages, emails, phone calls, blog posts, Instagram photos, video appearances, and so on – then using software to determine that person’s distinct style, their expressive voice, in order to repeat it after death in a kind of machine learning memorial.  Extrapolated one step further, we can envision virtual advisors from the past who assist us in our work, or grief therapy programs based on living people interacting with mimetic forms of the deceased.  Another concept sees software simulating a more generic or less representational human being, a virtual person living a digital life, which can then be put to various uses, from study to work; with some degree of autonomy such emulations could well develop their own society.
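The “machine learning memorial” concept above can be illustrated at toy scale with a Markov chain text generator: feed it a person’s accumulated writings and it will echo their characteristic phrasing. This is a deliberate simplification — a real system would use far richer models — and the function names and sample corpus below are invented for illustration only:

```python
import random
from collections import defaultdict

def build_model(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, seed: str, length: int = 10, rng=None) -> str:
    """Walk the chain from a seed word, echoing the source's phrasing."""
    rng = rng or random.Random(0)  # fixed seed keeps the sketch deterministic
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on decades of one person’s texts, even this crude chain reproduces pet phrases; the post’s imagined memorials simply extend the same idea with much more capable models.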

Viewed from a political and economic perspective, our present political and social arrangements and the historical record of the first through third industrial revolutions suggest several different civilizational reconfigurations, as Peter Frase and others have argued. If automation renders many jobs obsolete, human creativity could respond by creating new functions, jobs, and professions for carbon-based life to perform.  After all, hardware and software need some degree of managing, and the social impacts of automation will transform current human needs while creating new ones, which emergent professions could meet.  Alternatively, automation’s successes may be limited to ways of assisting rather than replacing people, enhancing work rather than outmoding workers.  In this future we could work closely with machines in some form of cyborg relationship, either literally through implants and ingested devices, or metaphorically, as we come to depend ever more intensively on software, data, networks, and hardware to perform our various tasks.  Machines would closely empower our work and lives.

We can also imagine a socio-economic elite powered by automation and related industries, dominating a society consisting largely of disempowered poor or working class people kept in line through a mixture of rich entertainment and ubiquitous surveillance.  This could become something nearly medieval in scope, with a social base of impoverished techno-peasantry and a vanishingly small middle class above.  Social media by itself would perform that mixture of pleasant distraction and data-driven monitoring.  Other technologies could be pressed into service: AI for more ambitious data-mining, robotics as a police force, and tiny networked devices for extensive surveillance.  From this future we would look back and see the early 21st century dystopian literature wave and the warnings of Carr, Lanier, et al as eerily prescient.

Alternatively, a less dystopian version of this automated inequality world would see most workers freed from many historical drudgeries thanks to automation’s successes, and leading healthier lives.  In fact, some futurists see such a world as one positively liberated by automation.  This is where the universal basic income (UBI) idea enters discussions, based on various plans to guarantee all residents (or citizens, a crucial difference) of a given nation or region a sufficient cash transfer to maintain a basic existence.  UBI proponents often pitch their idea as a response to automation’s capacity to render human workers obsolete and the possibility that we will not generate new professions.  The average work week may fall from the classic 40 hours to 30 or 20.  Alternatively, more people may alternate periods of full time employment with seasons of unemployment.  A UBI system would tide people over these compensation shortfalls.  Moreover, without an existential requirement to work for pay some of us may choose to pursue non-remunerative tasks, from writing a novel to learning a foreign language, spending more hours caring for loved ones or conducting a religious pilgrimage.  UBI could usher in a new dawn for human potential: quite the knock-on effect from automation’s potential triumph.

Automation could yield another range of possible mid-century worlds, wherein devices and software progress even further, augmenting the world with a posthuman ecosystem.  Imagine machines handling many of today’s human tasks, but better: hauling cargo in redesigned vehicles, growing crops, diagnosing human and animal illnesses, building colonies on Mars and the moon, performing surgeries, all more safely and efficiently than humans could.  Software produces nonfiction and creative art, manages the economy, patiently counsels and instructs humans.  We have seen horrific versions of this in fiction, such as Capek’s human-exterminating robots (1920), the Terminator movies’ genocidal Skynet (1984ff), and the “benevolent” tyranny of Colossus (1970), but popular culture has also produced more positive visions of a posthuman society, most notably Iain Banks’ far-future Culture sequence.  This fiction brings to mind a deep question: faced with being kindly outmoded by our technological creations, how would humans react, psychologically and culturally? Would we rage against these devices, as Victor Frankenstein snarled against his much more articulate monster?  Would we instead accept our new status and launch a society-wide vacation, a la Wall-E (2008)?  This is a question the university is supremely well suited to explore, given the intellectual depth of our many disciplines.  Imagine a curriculum based on the new, post-human age, and how history, computer science, sociology, literature, philosophy, and economics might teach it.

Yet we must be cautious about these mid-century visions, since they are based primarily on certain possible ways we could restructure our world based on only one technological domain, that of automation.  Consider instead the futures driven by other technologies currently in development.  A Facebook team seems to be making progress in developing a device to allow hearing-impaired people to experience audio communications as haptic vibrations, either returning to them the sense of sound or producing a new, sixth sense.  How else might we enhance the lives of the disabled, or extend the range of human experience?  Research into brain science has allowed early methods of physically intervening in human cognition, leading to explorations of altering mental states, connecting minds directly to computers, or linking minds together.  The potential for torment and abuse here is vast, as are, once more, the possibilities for expanding what humans can do in the world, not to mention exploring the old dream of teaching by sending information directly into the brain, a la Neo in The Matrix (1999).  Meanwhile, the long-running field of genetic engineering, frequently the source of dystopian imagination (Gattaca, 1997) and ethical conundrums, is developing new powers through CRISPR technology.  We can, perhaps, redefine human and overall biological life on Earth.  Add other technologies and practices to this mix – psychopharmaceuticals, advanced artificial limbs and organs, 3d printed anatomy, the internet of things installed within bodies – and what it means to be a human being in 2045 becomes a radically different question from the one posed in 2018.  Once more, what other institution is better positioned to guide us through such extraordinary challenges than the academy?  And to what extent will colleges and universities shape such a future through research, producing technologies, practices, and concepts?

At the same time the biological world may be further inflected by changes in large-scale material science and new projects.  Ever-shrinking computational devices may lead us to mobile and networked machines small enough to be ingested, able to conduct medical work on the human body.  Even at scales larger than the dreams of nanotechnology we can imagine transforming the physical world through the deployment of networked mites too small to be seen by the naked eye, perhaps leading to materials that can be addressed remotely or function autonomously, a/k/a “smart matter.”  3d printing could literally reshape aspects of our built environment, as might the use of new materials, like very strong and light graphene.  New materials may well be needed, as massive geoengineering projects are currently under consideration for mitigating climate change, such as adding saline to an entire ocean, building region-scale seawalls, altering the planetary atmosphere’s chemical composition, or installing a massive shade in deep space between the earth and sun.  To reach space at all we currently use fairly risky rockets, the dangers of which have elicited experiments and designs for everything from reusable spacecraft to atmosphere-straddling space elevators.  New entities are participating in a 21st century space race that barely resembles that of the 20th century: corporations, billionaires, and nations building programs for the first time. These potential innovations or transformations could impact higher education in multiple ways, starting with altering a campus’s physical plant.  College curricula and student career services would likely develop programs to support learners who seek to work in those new fields.  The development of any or all of these projects will draw heavily on academic research and development.  Further, many university departments, from political science to philosophy and sociology, will be able to contribute to the selection and critical assessment of such epochal projects.

All of these possibilities are based on trends that we can perceive in the present day.  Meanwhile, beyond those evident change drivers, black swan possibilities also lurk ahead.  Historical examples abound, such as a leader’s sudden death by accident or assassination that unravels a political order.  A new religious sect or the vigorous reformation of an existing faith can win adherents and upend societies.  Beyond political and social causes, a pandemic that exceeds our medical containment capacity could not only constitute a humanitarian disaster but also sap regimes, shock economies, and electrify cultures.  On the other hand, a medical innovation might save or extend lives, such as a cure for congestive heart failure or a therapy that ends Alzheimer’s.  Many natural disasters have so far been handled without disruption by our current national and international systems, but larger scale ones are quite possible and potentially devastating, such as cometary or asteroid impact.  Climate change proceeds slowly, yet an unlikely and sudden shock, such as the Atlantic Ocean’s thermohaline circulation system shutting down, could yield a range of powerful impacts. And, since by their very nature black swan events are challenging to anticipate, we may well be hit by a completely strange development, one which to us in 2018 is a Rumsfeldian unknown unknown.

Our digital world may be especially vulnerable to these low-probability, high-impact events.  A solar coronal mass ejection of sufficient size could damage networks and devices over a large geographical range.  An electromagnetic pulse (EMP) could remove a target completely from the internet and the electrically connected world for a short period of time, leading to potentially catastrophic results.  Imagine a city or state not only forced offline (no banking, email, documents, voice calls) but cut off from electricity: no lighting, refrigeration, air conditioning; no use of cars or aircraft.  The immediate medical consequences are, to pick but one result, dire. Digital attacks conducted by national governments and their military or intelligence agencies (cyberwar), by organized crime, by other non-state actors, or by organizations that have yet to appear could crash major networks.  If any of these occur at sufficient scale, a social disaster could unfold, given the deep dependency we now have on the digital world.

Amid all of these possibilities and others, and in all humility, we must consider the ways higher education might develop after 2040.

Posted in futures | 6 Comments

The Adams game: an exercise for educators in 2018

I’d like to propose a game or exercise.  It can be played in person or online, including in comments added to this very blog post.

It starts with this famous 1780 quotation from John to Abigail Adams:

I must study Politicks and War that my sons may have liberty to study Painting and Poetry Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.

How might that statement change in 2018?

Adding women, of course. Some might update the spelling.  The bigger challenge is thinking about today’s curricula.  What is our equivalent of studying politics and war?  Perhaps learning business and coding.

Consider, too, what we forecast to change over the next 50 years (approximating the quote’s two generations: “my sons” and “their Children”).  If today’s Adams must study business and coding, should their children learn, say, information management (to get a job wrangling automation) and Chinese (to speak with the world’s next superpower)? And then their children – do they turn to the humanities and arts, like Adamses 3.0?  If so, which humanities and arts?


A scan of the letter itself.

The advanced version of this exercise situates the speaker in a particular context, since you may not be in a position like that of John and Abigail Adams in late 18th-century America. What’s the updated quote for a Latinx adult learner in the US, as opposed to an eighteen-year-old male Briton, or a female war veteran?

An alternative version of this game reverses course.  We start with previous players’ answers, then analyze them to determine their assumptions about education, work, politics, and culture.

Who wants to take the first turn?

(thanks to Mike Sellers for support and for hashing out the idea)

 

Posted in gaming, research topics | 24 Comments

Are student loans succeeding? The Roosevelt Institute thinks not.

In the United States a major way we fund higher education is through student loans.  This strategy has become central as prices soared, states reduced funding to public universities, and as more people flocked to campuses (until around 2012).  So how is this strategy working out?

A new paper from the Roosevelt Institute argues that the focus on student debt has ultimately failed in its purposes, and is now actively harming American society.

Why and how do they see this?

One reason can be found in the context of rising economic inequality.  Overall, the report finds that worker “earnings have been either stagnant or declining for every level of educational attainment during the last 18 years…”  In fact, “as student debt burdens have increased, wages have remained stagnant or fallen…”   This is a huge claim, damning the student loan effort as a failure, at least in economic terms.  Simply stated, “[r]ising levels of student debt have not resulted in higher earnings.”

Morgan and Steinbaum develop this point further, arguing that the college premium (the lifetime earnings boost from having a degree)

was mostly driven by a drop in wages for workers with only high school educations rather than substantial increases in wages for those with college credentials. In other words, the value of a college degree increased because the cost of not having one increased

In a Vice interview one of the report’s authors, Julie Margetta Morgan, explains their analysis:

It’s no longer this thing of, I’d like to earn a higher income, I guess I’ll go to college. It’s like, I have to go to college in order to not end up in poverty—and I’m also forced to take on debt to get there.

In other words, it’s not that higher ed boosts people, so much as those without post-secondary degrees are falling behind.  We are misinterpreting the widening gap between the two as progress.

A second point is the sharp generational difference in debt holding. “Student debt affects a much larger share of the young adult population than it used to…”  This isn’t shocking news for my readers, but is an essential reminder.  Morgan and Steinbaum offer this telling visualization:

[Chart: holding student debt]

Remember the recent Bloomberg analysis of student debt?  This giant debt bolus is now crammed into our youngest generations.  Their lifetime earnings are cramped by it, which means their lives will most likely be constricted, and the overall economy held back.

A third point: debt is strongly inflected by race.  “[T]his expansion is particularly evident among people of color.”  Black and Latinx populations hold more debt than other groups.

The student debt crisis is a profound policy failure for everyone, but denying its existence is a particular injustice to racial minorities who have borne the brunt of that failure, as they previously did of the housing bubble and its deflation in the 2000s.

Fourth point: a growing number of debt-holders aren’t paying down their holdings.  “[A]n increased share of borrowers with positive debt balances aren’t making any payments or are making payments that won’t be sufficient to retire their debt,” which certainly represents a policy failure for the student loan effort.
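The arithmetic behind that fourth point is simple: if a borrower’s payment does not cover the interest that accrues each month, the balance grows even as they pay. A minimal sketch — the function name and the figures below are illustrative assumptions, not drawn from the paper:

```python
def balance_after_payment(balance: float, annual_rate: float,
                          payment: float) -> float:
    """One month of interest accrual followed by one payment."""
    return balance * (1 + annual_rate / 12) - payment

# On a $30,000 balance at 6% APR, monthly interest is $150.
# A $100 payment leaves the borrower deeper in debt than before;
# only payments above $150 actually retire principal.
```

Run month after month, this is how "making payments that won’t be sufficient to retire their debt" plays out: negative amortization, a balance that rises despite steady paying.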

There’s a fifth point I’d like to note from the paper, namely that this economic transformation (rising student debt, stagnant wages) has a labor politics.

Making higher education more responsive to the stated needs of employers worsens, rather than rectifies, the student debt crisis by reducing workers’ power in facing a labor market where employers hold all the cards.

In other words, in today’s economy graduates are not so much empowered as they are seeking to be hired by truly powerful employers.

The paper’s conclusions are dark, especially for campus strategy.

[U]niversities have responded to this incentive structure by creating a plethora of new graduate degrees that ostensibly serve to distinguish their graduates in a crowded labor market but, in fact, work to extract yet more tuition from students while cheapening the value of the credentials they supersede.

The push for more higher education attainment has led to widespread credentialization, which students pay for in many ways.  In an interview one of the authors refers to our debt-for-degree approach as “a failed social experiment”.

Overall, “The Student Debt Crisis, Labor Market Credentialization, and Racial Inequality” offers a grim critique of where American higher education finance now stands.

Is it correct?  And, if so, what happens next?

Posted in economics | Leave a comment

American birthrates decline again

American fertility rates have continued to decline, according to new data just published by the Centers for Disease Control (CDC).  This has powerful implications for the future of the nation and world, as well as for higher education.

One key finding: fertility rates have fallen across all geographical regions:

Yes, city people are having fewer children than rural folks, as is pretty typical.

Each of those new rates is below replacement level, the number at which a population sustains itself.  Without immigration, the total number of American residents will eventually shrink – not grow more slowly, but actually get smaller.
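The replacement-level logic can be sketched with a deliberately crude back-of-the-envelope projection: scale a population by the ratio of the total fertility rate to replacement (2.1) once per generation. The function and numbers below are illustrative assumptions, not CDC data:

```python
def project_population(pop: float, tfr: float, years: int,
                       replacement: float = 2.1,
                       generation: float = 30.0) -> float:
    """Scale a population by (tfr / replacement) once per generation.

    A toy model: it ignores migration, mortality timing, and age
    structure ("population momentum"), which is why real populations
    keep growing for decades after fertility dips below replacement.
    """
    generations = years / generation
    return pop * (tfr / replacement) ** generations
```

With a fertility rate of 1.8, a hypothetical population of 100 million drops to roughly 86 million after one 30-year generation and keeps falling after that, which is the long-run shrinkage the CDC numbers point toward.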

A second finding: birthrates dropped across the three races measured in the study, white, Latinx, and black.

The Washington Post quotes one analyst’s reaction to this aspect:

William H. Frey, a demographer with the Brookings Institution, said that what struck him about the new report is the figures on Hispanic women, who have traditionally had high fertility rates. From 2007 to 2017, Hispanic women experienced a 26 percent drop in fertility rates in rural areas, a 29 percent drop in smaller metro areas and a 30 percent decline in large metro areas.

He said the fertility rates for Hispanic women in urban areas are now below the “replacement rate” of 2.1 children per woman, which would keep the population stable.

The Latinx population remains the second-largest in the US, but its growth rate is slowing down.

A third point: the age at which women give birth continued rising.  Comparing 2017 to 2007 data, the CDC found birth age “rises of 1.7 years in rural, 1.9 years in small or medium metro, and 2.4 years in large metro counties.”

This indicates, among other things, that teen pregnancy continues to decline.

What does this CDC report suggest about the future?

We may see much more serious political stresses over childbirth, fertility, and immigration, especially if the latter drops under Trump.  The US might well follow the demographic patterns already carved out by Japan, most of Europe, and other developed nations: an aging and possibly shrinking population.

For higher education, we will continue to face a fundamental pressure on the traditional-age undergraduate population.  As I and others have been saying for years, this has profound implications on college and university teaching, recruitment, and funding.

On the flip side, the Post article concludes with the suggestion that the US might invest more in education, in order to increase economic productivity.

John Rowe, a professor of health policy and aging at Columbia University Mailman School of Public Health… said that some other wealthy countries, such as Japan and Germany, are grappling with low fertility rates, and there’s a lot to learn about how they have managed their smaller workforce to maintain high productivity.

“The emphasis should not just be on the number of people but their productivity. So we have to invest in education to enhance the productivity of younger individuals to compensate for reduction in numbers,” Rowe said.

This might not work well, however, given enrollment challenges and widespread anxiety about education costs.  Americans might not buy the idea of spending more on education, especially as other demands (health care, senior services, crime, roads, etc.) compete for scarce dollars.  I’m not convinced most of the nation is ready to pay higher taxes to send a larger population to more colleges and universities. But I could be wrong.  We might see a doubling down on the modern call that “everyone needs college,” if that argument proves persuasive.

We might also see more calls for Americans (i.e., women) to have more children, as I’ve noted previously.  One sign of this comes from a New York Times opinion writer. In a column comparing birthrates in Europe (low, falling) and Africa (high) Ross Douthat calls for European women to start having more children: “anyone who hopes for something other than destabilization and disaster from the Eurafrican encounter should hope for a countervailing trend, in which Europeans themselves begin to have more children.”

Why?

This would not forestall the near-inevitable northward migration, but it would make it easier to assimilate immigrants once they arrived — European economies would be stronger, ethnic polarization would not fall so dramatically along generational lines, and in politics youthful optimism and ambition might help counteract the fear and pessimism of white Europeans growing old alone.

There’s a lot to unpack there, starting with silence on ramping up education instead of babies, as well as a touch of intergenerational struggle.  But Douthat then immediately undermines his own call by adding:

Of course government efforts to raise the Western birthrate, France’s included, have been no more obviously successful than Western-sponsored efforts to cut birthrates elsewhere in the world.

I’m not sure where he wants to go with this.  Possibly he’s hoping for a cultural shift, or at least a religious one.  I doubt that most Americans will make that shift, especially since so many women are working.  Douthat is coming from a religious angle, I think, yet rising generations are less likely to be religiously affiliated.

Educators need to follow this closely.

Posted in demographics | 2 Comments