The following is a transcript of the sermon written and delivered by Harmony member Rob Rogan on Oct. 1, 2023.
So it has been a minute since I’ve done a sermon up here, and I’m kind of excited to get back to it. And then I’m going to throw a disclaimer out there. I do like to do these. Some of the lines of logic and scenarios I’m going to throw out here today, may be considered too dramatic. They may be considered unrealistic, or maybe not.
But I ask that even when I intentionally get a little hyperbolic, that we allow ourselves the space to run down these logical rabbit holes. And at the end of our discussion, we consider if we agree with them or not.
But if you’d give me a little artistic license, I appreciate it.
Also, before I do get into it, I do want to give some credit for some of the source material to Israeli historian Yuval Harari. If you don’t know him, he’s just a really, really interesting guy to listen to. He’s got a really straightforward take on a lot of things, usually from a historical lens. And I highly recommend him; he’s got a lot of content out there.
So– and I am actually going to play a couple clips with him towards the end. So you’re going to see him. All right. So with that out of the way, I’m going to go ahead and dive in. All right.
So, question, how many here have seen the Stanley Kubrick classic film 2001: A Space Odyssey? All right. Maybe half. OK. By all accounts, it’s probably one of the more groundbreaking movies of all time.
For those who haven’t seen it, I’d offer, it’s a fairly obtuse film. It’s filled with imagery as much as it is story. And it’s really designed to evoke emotion, maybe even more than it is to be logically interpreted. Kubrick himself stated that if anyone walked out of the theater on the first viewing and understood the film, that he failed as a filmmaker.
However, what is quite clear is the representation of technology as this two-sided sword that hurts humanity as much as it helps. I particularly marvel at the imagery of the first act of the film. The whole first scene, maybe 20 minutes, has not a line of dialogue. And it’s this powerful story depicting the primitive ancestors of humans who learn to use bones as tools for the first time and then immediately use that to kill fellow primates in acts of rage.
Now, most remember the more infamous third act of the movie, where the dangerous computer known as HAL becomes one of the first cautionary tales of AI in cinema. In the film, the soothing voice and complex abilities of the HAL 9000 computer control every system and device around our astronauts in space.
And while HAL is compliant with their requests and gentle in his demeanor, Kubrick masterfully makes the viewers feel so uncomfortable by showing how helpless the astronauts were. Even in scenes where HAL is kind and benevolent, we feel how the astronauts are completely reliant upon the technology.
Now, this movie, which was released 55 years ago, really did a bad job of predicting the year 2001. But perhaps now in 2023, we are at a time where technology is approaching an uncomfortable relationship with humans. And I intend to portray today a cautionary view of technology.
I’m not going to talk about the kind of violent technological uprising featured in pop culture cinema, like The Terminator, The Matrix, or other more common sci-fi human-versus-AI tropes. Instead, I intend to discuss technology, even under concentrated human control, that represents a more gentle but dangerous HAL-like risk to the very structures of our society. And maybe what we can do about it.
So let’s kind of start at a societal level. So I’d offer that 15 years ago, many people in the Western Hemisphere considered the future of our world pretty clear, that liberal democracy and capitalism and the values of liberty and freedom would spread throughout the world, bringing peace and prosperity with it. And just to be clear, when I say liberal democracy, I don’t mean politically liberal left. I mean the classically liberal.
We typically define liberal democracy as the idea of a government by the people, but with limited power and with individual rights and freedoms guaranteed in a constitution. So I’m going to keep using that term, liberal democracy, and it’s always going to be in those terms.
And so when looking over the last 125 years, it’s easy to understand why many felt that liberal democracy was really destined to overtake the world. At the start of the 20th century, there were three structures predominantly held around the world: fascism, communism, and liberal democracy.
By the middle of the century, that had really narrowed down to a form of communism and a form of liberal democracy, but by the end of the century, liberal democracy and global capitalism had really become the dominant ideology of the world. It has consistently brought economic fortune, reduced violence, and improved standards of living everywhere it has been. The natural assumption was that the future would be the continued spread of this to all parts of the world, bringing liberty and prosperity.
And just when we maybe think we all had it figured out, the last 10 years have shown signs that maybe the idea of the future isn’t so predetermined. Nationalism, xenophobia, a backtracking of women’s and LGBTQ rights are all warning signs. Despite its success in the 20th century, perhaps the current structures of liberal democracy are not prepared to handle the problems of the 21st century and are showing cracks of failure.
Now, I will start with the recognition that our system of liberal democracy has required significant tweaks to work as well as it does today. A hundred years ago, it really only served upper-class white European males. And over the last hundred years, the Western form of liberal democracy has changed to recognize that human rights must extend to all, and that social welfare must extend beyond free market capitalism to provide services to those less fortunate.
Laws protecting women’s rights, racial rights, LGBTQ rights and other protected classes all helped the historically marginalized. Antitrust laws, the 40-hour work week, child labor laws, OSHA, financial regulations and a social welfare net all helped curb capitalism for the masses.
These structural changes to our Western world were achieved through significant pain and effort. But these solutions seem somewhat self-evident to us today. The coming problems of the 21st century may be even more difficult to solve and require greater effort and further systemic changes.
Now, to frame the discussion of why technology poses such a risk towards liberal democracy, I would like to posit that the core bedrock of our modern world, Western world, relies on three principles: The people know best, you should follow your heart, and the customer is always right.
And by that I mean democracy relies on the majority voting for the policies and the people to run our government, and that is built on the concept that “the people know best.” Liberal freedom is based on the idea of job and class mobility, and that those who follow their heart will find what is best for them in our society. Finally, capitalism uses consumers as the basis for picking which company is producing the best product, and ultimately relies on the concept that “the customer is always right” to drive continuous improvement in products and corporations themselves.
These principles have formed the basis of the success of our Western world, but at the heart of all of them lies the independent judgment of people. The question for today is: what happens in this century when the will of human beings can be successfully hacked by technology?
So what do I mean by hacked? No, I don’t mean technology going inside voting machines and changing votes, and I don’t mean it taking control of our mental functions or taking control of our bodies. I’m not talking about that. When I say hacked today, I’m talking about the ability of technology, or the people who wield technology, to reliably control our decisions and actions through only external input, and ultimately the challenge of what it means for those three foundations if we are truly hackable.
So let me start by saying this is not a new concept. We have spent our whole lives with third parties trying to hack our desires, right? Advertising executives have spent much of the last 75 years trying to do it. For centuries, nations and religions of the world have found ways to hack the human heart in ways that would have all of humankind accept suffering or cruelty beyond what we would imagine, while at the same time also inspiring us to build some of the greatest structures in history and perform tremendous acts of charity.
But in all that time, it was selling a universal message, and through all of it, human beings were still individuals making choices. So a government or a church could put forth a single universal message, such as salvation, and that relied on the universal appeal of that message to all the people, in hopes that their individual choices would align with the desires of the church or the government.
Now today, a couple things are happening side by side that make this changing method of hacking our desires such a risk.
So first, as the digital age progresses, all of us are supplying vast sums of information about ourselves. We are equipping technology, and the central humans who run that technology, with everything they need to break into our code.
Secondly, we are increasingly turning over our decisions to machine algorithms. So the first time we ignore directions from Google and we don’t exit the highway and we get stuck in a traffic jam, we surrender our driving directions to Google going forward. When we are paralyzed by endless entertainment options, we let Netflix decide for us what to watch. When we need to write a letter of recommendation for a colleague, we let ChatGPT do it. When we want news, we let the Yahoo or Google algorithm determine what to read. When we can’t lose weight or we want to improve our health, we let technology tell us when to eat, what to eat, and when to exercise.
Why do we do this when clearly technology doesn’t make perfect decisions about what we want? Well, we recognize that no machine or algorithm is perfect, and that the point of inflection to turn our decisions over isn’t perfection, but when the algorithm is better at deciding than we are.
And you can ask yourselves how many things in your life is that already a true statement? So if you combine the increased power of all the algorithms in our lives with the data we are freely giving away about ourselves, consider the power of technology companies as technology itself will develop.
As we sit here today, how many of us are not only carrying tracking devices in our phones that record where we go, but even wearing biomedical sensors that detect our step count, heart rate and sleeping patterns? Is it so far off to imagine interfacing our Apple Watches with our Netflix devices so they can understand what programming gets our heart pumping? Isn’t it reasonable to extrapolate this trend further and further?
And at some point, the artificial intelligence or technology companies of the future will understand our own bio processes so well that hacking our desires will be simple for them.
So the optimistic view of this is what an incredible future we’re moving towards. It’s like Huxley’s Brave New World. We will increase happiness by optimizing our every waking moment. The algorithms of AI and big tech companies will constantly be by your side and you will truly believe that it is here to make your life better.
The question of whether this is a dream or a nightmare might become blurry. And while Hollywood loves us to fear the gunslinging Terminator war with technology, isn’t this slow, creeping takeover just so much more likely to be what’s gonna happen?
We can ask ourselves how far the tech giants and AI will have to burrow into our lives to truly hack our desires, but I think we all know we are slowly giving them the keys to our decisions, and they will have gained such trust with us. How will we know when they start lying to us?
So if we return to the way those in power in the 20th century attempted to influence, with a universal message to a universal audience making individual choices, the technology of the 21st century will allow individualized messages to an audience of one, delivered by an agent that is growing to know you better than you know yourself, and whom you have been conditioned to let make the decision for you.
If you consider the atrocities committed over the last thousand years with the antiquated historical method of hacking our desires, I shudder to think of what could be possible with such an increase in power. And if we return to the foundations of liberal democracy I posited a few minutes ago, this potential hack of our desires breaks down these fundamental principles of what makes the Western world work: follow your heart, the customer is always right, and the people know best. A world where your heart, your choice and your vote can be easily manipulated.
The algorithms and the external systems that they govern cannot just predict my decisions. They can also manipulate my feelings, my emotions. A dictator may not be able to provide me with good health care, but he will be able to make me love him and to make me hate the opposition.
Democracy will find it difficult to survive such a development. Because in the end, democracy is not based on human rationality. It’s based on human feelings. During elections and referendums, you’re not being asked, what do you think? You’re actually being asked, how do you feel? And if somebody can manipulate your emotions effectively, then democracy will become an emotional puppet show.
So whether you find all this hyperbolic or not, it may be worth asking if you believe that the calm, seductive voice of the HAL 9000 computer that controls every moment of your life is a more realistic and frightening potential than the Arnold Schwarzenegger Terminator.
And if you see the risk laid out here, even at a small percentage, the question is what can we do about it to potentially alter where this could be going? And I would offer there are probably at least four key areas to consider.
So one, maybe obvious, is regulation. We all need to push for AI regulation and tech breakup, quite frankly, while we still can. Starting in September, as most of you may know, Google has been on trial for its monopoly of the internet. And for the most part, if you read it, their defense is: yeah, we know we’re a monopoly, but it’s really because we’re that good, that you all just want us. However, any of us who use Google can see that as their monopoly has grown over the past few years, the results of their searches are increasingly ads, or point directly to their own products like YouTube.
Breaking up tech companies into smaller and more manageable pieces helps protect against the centralization of massive amounts of information that can be so powerful. When the courts can’t help us, it is up to us to pressure our congressional representatives to provide an overriding regulatory system for tech and AI. And we should be voting for those who support this.
Last year, the White House released an AI Bill of Rights, but it was purely ceremonial, having no enforcement and no oversight; it was just this kind of whimsical thing they threw out there.
Just a few weeks ago in early September, most of the famous heads of the tech giants met with Congress on the subject of AI regulation. And universally, they all said we should be regulating this right now. We regulate the most powerful and potentially dangerous things in our society right now, from transportation to food to drugs to power production. It is criminal that we do not have a regulatory agency for tech and AI development at this point.
Two, stop giving up your data. I know we have heard this message a hundred times and we hear it constantly, but maybe the question is how many of us actually act on that subject? We may think it’s difficult, but I will offer there are a number of ways you can do it. If you’re gonna surf the internet, even if you’re gonna use Chrome, you can use browser extensions like uBlock Origin or Privacy Badger to block companies from tracking you. Those are all free extensions you can add.
Those aren’t available on mobile, though if you’re on a mobile device, AdBlock Plus is available for iPhone; it’s a free app. I would also offer you could switch to DuckDuckGo as your browser. That used to be just a search engine, but now they offer a browser that is highly privacy-oriented to keep them from tracking you.
Furthermore, most tech companies actually do, in an obscure way, allow you to opt out of their data collection, even major companies like Facebook and Amazon, but it’s really overwhelming to go do it. However, if you do, there’s a site called simpleoptout.com, I don’t know if you’ve ever been there, but basically they’ve gathered in one location all the hundreds of links to all the big companies that will allow you to opt out of their data collection.
And as an example, if you go to their Facebook link, there are actually three different spots where you have to opt out of Facebook data collection. They make it obscure seemingly on purpose, just to make it difficult to do, but SimpleOptOut is a way that you can go through and try to gain some privacy back.
I don’t claim to be a tech expert, and I’m sure there are many more ways that people in this room understand better than I do. I just throw out a couple of starters there, and I encourage you, as we’ve heard over and over again, to research your own techniques and figure out how you want to keep your data to yourself.
Three, choose your data source. This image went around the internet a lot a few months ago: an alarming number of people choose to get their news just as it shows up, from either Google or Yahoo, or just at the top of their phone. Even worse, some choose to get all their news from social media, which is a little terrifying.
The thing is, we all know this is being edited for our eyes, right? So I would offer that we’re all better off if we force ourselves to listen to information outside of our recommended bubbles, even if it’s just to define what we disagree with. I don’t necessarily agree with this graph. The intention is that it runs left to right on the x-axis, left-leaning to right-leaning, and then from trustworthy and factual at the top down to pure opinion on the y-axis. I don’t know, I like The Young Turks, which is way over on the left and kind of low, and I like Breaking Points, over towards the top right.
Both of those don’t rate so well, but I’m less caught up in the ratings than in the idea that, I think we would all agree, the larger our diets of information are, the more broadly we consume across this chart, probably the safer we all are. And just like our food diet, if we consume from very few sources, or only similar sources, it is likely hurting us in the long run.
And just as an extension of this, I also think it’s a fun exercise to test your own beliefs and find what information would change them. And you can do this right here, right now; you can do it anytime. Consider any belief on a political issue that you hold to be true and important, and ask yourself: what information would change your position?
Because I think it’s easy to wonder why your political opposite can’t ever change their position on anything. But if you looked at yourself and asked what would change your own thoughts on climate or universal health care or abortion, and you can’t define information that would change your position, that’s probably a tough spot to be in.
No one ever thinks they are in a cult, but if you can’t define reasonable information that would change it, it’s something to consider.
All right, the last one. Fourth and perhaps most important: know yourself. “Know thyself” was made famous by Socrates in ancient Greece, but it is of incredible importance in the modern world.
By definition, if some technology is going to hack you, then it has to understand you better than you understand yourself. While we can take steps to keep technology from having such a deep understanding of us, the other side of that equation is knowing really well who we are. This implies a deep understanding of our own beliefs, why we believe them, and also means seeing ourselves for who we really are and not for who we hope we are.
Because one of the easiest ways to manipulate humans is by telling them that they are who they hope they are, not who they really are. And to further this point, I have one last video, and we’ll let it carry us to discussion. So I’m going to let him kind of hit this point.
“Not our emails, not our bank records; they hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy democracy from within. This is actually a method that Silicon Valley pioneered in order to sell us products.
“But now the enemies of democracy are using this very method to sell us fear and hate and vanity. They cannot create these feelings of nothing, so they get to know our own pre-existing weaknesses, and then use them against us. And it is therefore the responsibility of all of us to get to know our weaknesses, and make sure that they do not become a weapon in the hands of the enemies of democracy.”
“Getting to know our own weaknesses will also help us to avoid the trap of the fascist mirror. As I explained earlier, fascism exploits our vanity. It makes us see ourselves as far more beautiful than we really are. This is the seduction.
“But if you really know yourself, you will not fall for this type of flattery. If somebody puts a mirror in front of your eyes that hides all your ugly bits and makes you see yourself as far more beautiful and far more important than you really are, just break that mirror.”
I appreciate your time today, and I look forward to the discussion. So I think we can probably do maybe three groups today, so maybe we can do kind of one towards the back.