Thomas Hübl: Welcome back to the Collective Trauma Summit. My name is Thomas Hübl and I’m the founder and convener of the summit, and I’m very happy to be here with Randy Fernando. Randy, a very warm welcome.
Randima Fernando: Hi, Thomas.
Thomas: To our summit. I am looking forward to this conversation because I’m passionate about the intersection of technology, mindfulness, and trauma healing. I think that’s a great conversation to have. Maybe today we can also add AI as a factor to the conversation, which I think many people might be very interested in. Maybe you can tell us a little bit about your story, because that gives us a bit of a connection to your own path: how you came from where you started in your career to being so engaged, being part of a movie, doing a lot of mindfulness work, and being deeply into tech.
Randima: Sure. Happy to do that. I was born in Sri Lanka, and Sri Lanka is a very Buddhist country. And so that’s where I learned the mindfulness side of things. My parents are very strong Buddhist practitioners, and so they introduced me to that side very early. Then also when I was about eight, I started programming because my father was really into technology, and so my mother taught me the mindfulness side and my father taught me the programming side. Those two threads have sort of woven together in my life, I’d say in unexpected ways. I was just really excited about making pictures on the screen, lighting up pixels. This was really fun for me. I did all this programming and just followed that path, ended up here in the United States to do that.
Then I was at Nvidia doing product management for seven years. I authored several books on advanced computer graphics and looked at the hardware side. It’s interesting because that same hardware, that architecture, turns out to be the engine for the artificial intelligence revolution that’s happening now. So it’s very interesting how these things have come full circle. I also got a lot more interested in mindfulness. After seven years at Nvidia, I was really interested in the nonprofit space and wanted to find other ways to contribute, because I felt like I was well positioned to do that.
So then I helped to build an organization called Mindful Schools, which I ran for seven years. That was really about mindfulness and how we could scale it in the education system. That was really awesome. Then I’ve been doing this work at the Center for Humane Technology for six years. I had met Tristan, I think in 2015. At the time, I didn’t fully understand all of the downstream consequences of the attention economy. I was like, “Oh, I don’t think it’s a good thing. I think this is a problem,” but I didn’t see the impacts on democracy all the way through. When you take the attention economy and run those races for attention for a while, you start to get all of these secondary effects that aren’t that obvious initially.
We can talk more about that, but things like the breakdown of truth, the breakdown of democracy, polarization, those aren’t things that I saw as clearly in the beginning. And now, after six years, you can of course see all of the downstream effects of these technologies that have been part of our society since 2007, 2010. It has been a while, and the consequences have been pretty bad. The other piece that’s been really interesting is that at the Center for Humane Technology, I wasn’t expecting that the mindfulness side and the social impact side would be folded so much into the work, but it turns out that if you want to have a conversation about humane technology, you are really talking about systems.
You’re talking about systems, you’re talking about the mind and human nature, and intention, and attention, and how do we protect it, and how do we align our choices so we can make wiser choices. Everything actually folds in together quite beautifully. I’ve really enjoyed the way it’s come together.
Thomas: That’s so beautiful. I mean, first of all, it’s very rich what you already did. I also want to highlight that it’s beautiful that you felt, in the middle of your career, that you wanted to give something to the nonprofit space and to contribute there. That’s actually a great development that I think is remarkable.
Randima: One of the things I feel very strongly about, and maybe some of you listening feel this way too, is that people who are well positioned in their careers or companies have incredible expertise that the nonprofit sector needs. It’s a sector that can actually align action better with social impact. I think the for-profit sector always struggles with this because of the incentive structures. People start off wanting to do something good, but because of the broken incentive structures and this idea that you have to keep growing, and grow infinitely, at some point the metrics take over the values. There’s this noble mission that someone may start with, but pretty much every time it gets taken over, and then things go horribly badly.
This is why I think there’s a lot of room in the nonprofit sector for these kinds of contributions and this kind of expertise, which is often hard to get in the nonprofit world with nonprofit salaries and structures. I just really encourage anybody who’s kind of on that border: try the jump, see what happens.
Thomas: That’s beautiful. I wholeheartedly agree with you because we are also operating in both spaces, and it’s amazing what you can do when you have both streams together. That’s beautiful. Maybe you can tell us a little bit about, let’s start with the attention economy and all the side effects that you identified, or not all, but some of the big side effects of it.
Randima: There’s so much, right?
Thomas: Yeah, right. But so that our listeners can all be on board with you: what does that mean? For some people, that might be a concept they haven’t heard yet, so maybe you can break it down a bit for us.
Randima: Well, the attention economy is really about this idea that attention can be monetized, the idea that this is kind of the fundamental unit of doing stuff. Our mind works in units of attention, and in order to be mindful, in order to have some kind of presence, we have to have our attention with us. And I think what companies realized is that precious attention can actually be sliced quite a lot, and each of those slices can be monetized. And so we’ve ended up in this crazy race where, primarily for social media companies, but for everyone, in order to make you do something, they must have your attention. So the attention economy is this battle for your attention.
A lot of times, that shows up as free products that we start using, and then they start to find ways to hijack our attention, to get our attention at times when we don’t intend to give it, and then to direct it in one direction or another. Most of the time, they do that through advertising. They sell your attention to a third-party advertiser. Already, you can see that this creates a misalignment between the product you’re using and your goals, because the product is trying to get your attention to give it to somebody else, and that doesn’t really align, because it doesn’t really know what it is that you are there to do.
And so constantly there’s this battle, and we all see this in the feeds that we have. There’s always something that’s trying to get us to click, trying to get us to watch this video, and you get this crazy race. That race actually… And of course all the companies end up having to compete against each other in this race. When one of them thinks of an innovation, like TikTok comes along with ultra short videos, so now everyone else from YouTube and Instagram and everybody has to say, “Okay,” you go back to the boardrooms and the strategy sessions and everyone’s like, “Oh my God, what do we do? How do we make a really short video as well? We’ve got to ship that ASAP to compete.”
Because what they find is that they’re now losing market share to the innovator. And so you end up in this race for harmful innovations that hijack attention. Then there are these secondary effects. The first thing is just attention and the inability to make good choices, taking us away from our real intentions. But after that, there are all these other effects, like effects on journalism. What happens to journalism when you’ve got all this content proliferating and getting amplified on social media based on what’s most resonant, what gets the most clicks? Suddenly, you’ve got this more click-baity, hyperbolic, divisive, angry, emotive content.
That’s the stuff that gets the most clicks. Then that also causes division. Political campaigns sort of do these experiments, A/B testing. What they find is that the headlines that cause more division, that are more outrageous, that paint the other side as bad, that generate this kind of outrage, get the most clicks. That gets people signing up for your mailing list; that gets you votes. And so political campaigns start to do this, and then the actual electorate, the citizens, start to become divided. That’s one example.
Another example is, let’s take a simple example, the like button. If there’s so much content, how does a platform know what to show you? Well, they kind of want to ask: what are the things that you’ve liked? What are the things that you’ve engaged with? They look at what you’ve clicked on, what you’ve liked, what you’ve shared, and they give more and more weight to those things. They start to show you more and more of those items. Then what happens over time is that your feed starts to get narrower and narrower. It gets more populated by those specific types of content. Again, those are going to tend to be more outrageous, more emotionally resonant.
They’re not the kind of thoughtful, balanced pieces; those tend not to get as many clicks. And so you actually end up not only with this sort of outrage and division, but also with this narrower reality. Everyone gets a customized, personalized reality, which sounds good initially, right? You’re like, “Oh, that’s kind of cool. I’m going to get just the content I care about.” But what that means is you get divided into these echo chambers; you get a narrow view of what reality is. We can see this. If you go to a friend and just swap feeds, you’ll see how customized it is, especially if the two of you have very different political views. You’ll see very different content and a very different reality.
As more people now, especially younger people, are using social media to get their information, we start to have some pretty big problems where there’s this division of reality. And so people have a harder time coming together, which is kind of the basic unit that we need to solve problems. We have to see each other as decent humans who might disagree. We have to be able to articulate each other’s viewpoints and say, “Oh yeah, Thomas, you believe these things. I see that point. I don’t agree with this other point. Let’s talk about it.” We get further and further from that world. We get further and further from representing information in a nuanced balanced way that represents multiple viewpoints.
And instead, when you do A/B testing on advertisements or on messaging, what you find is that very narrow, simple memes, kind of memetic messaging, is what wins. You can see this is how the world works now, but that removes all of the nuance. In the end, when you add it all up, you end up in a society that is more divided, less able to make sense of the world, less able to see each other and have meaningful conversations, and more distracted. And this is not to mention influencer culture and the effect on children.
What happens when they watch influencers and a certain kind of performative behavior? Everything has to be over the top. Children are very smart. They intuitively pick up this kind of currency of likes and shares and comments, and they know to look for that. They look around and see who else is getting more likes and more comments and more shares. What type of content is that? “Oh, I should do more of that.” They’re going to pick it up. Just like we figure out how gravity works as we are growing up, we also figure out this kind of social physics. And so that has a profound impact on how children grow up, how their minds are conditioned, what they prioritize, culture.
Suddenly you end up in this world where culture has changed and we’re focused in large part on the wrong things, too distracted to focus on the right things. Meanwhile, there’s the climate crisis: the fact that all of the economic growth that everyone is so proud of is, most of the time, driven by billions of air conditioners, billions of cars, billions of livestock. They all release CO2. And when you add all that up and you put gigatons, billions and billions of tons, of CO2 in the atmosphere, now we have a big problem, and that stuff doesn’t come down quickly.
So while we’re looking at the wrong things, this kind of thing is happening, and then we get fires and floods and ecological damage, and we’re not able to respond. We’re not able to coordinate a response, because we’re allowing these other factors to break us down. I just want people to see the full breadth of these problems and how our technology, which often ends up between our brains and the outside world, is sort of mediating. When it mediates badly, it keeps us from solving important problems.
Thomas: Absolutely. Yeah, that’s beautiful. That was a beautiful arc, how you led us into deeper understanding. Coming from exploring trauma for a long time, what I hear when I listen to you, and you tell me how you respond to this, is the following. In my regulated state, when I’m in a regulated, present state, as maybe we are right now, I have a much more nuanced use of technology. I am much more aware of information that might be off, that is not resonant. I’m in a place of choice.
But when we are less balanced, and trauma creates either hyperactivation or distancing or a numbness, we are actually not regulated, as we call it in the inner work. Our nervous system can’t regulate itself well, so it’s often in a hyper-stressed state. From that state, my world is actually already more narrow, or I’m experiencing the world as more fragmented, so the way I will use technology will most probably be defined by those factors. When you look at this attention economy, it seems the choices being offered play a little bit on that internal dysregulation; companies found a way to play on the internal dysregulation to catch the attention. I’m wondering what you say to that.
Randima: Yeah, so this is actually a big piece I didn’t go into, but I think we should talk about it a little bit: the addiction piece. A lot of times people get stuck on the addiction piece and they don’t see the full chain of events; that full chain is what I wanted to walk through. In some ways, you can think of the addiction piece like this: if these systems jack into our brains, the addiction is how deep the jack goes into your brain. A lot of times these technologies have learned very well how to hijack the dopamine hits that are meant to reward us for doing certain life-extending behaviors or reproductive behaviors, sort of survival mechanisms.
But now we’ve hijacked those mechanisms, and instead we’re just getting stuck with dopamine hits that are actually taking us nowhere. I think this shows up very, very poignantly with children. A lot of people who are listening who have children will understand that it’s so difficult to take a device away from a child, because they’re now getting this hypernormal stimulus, hypernormal meaning beyond what is normal, beyond what is healthy, beyond what the body has evolved to handle. Everything is very fast-paced. Even the screens: for very young children, the brightness of the screen alone is already hypernormal.
If you take a two-year-old or a one-year-old, and you put a TV on or you turn on a phone, the brightness of that screen is brighter than anything around them, so in an indoor environment their eyes will immediately go to it. And so you’ve got this situation where there are all these hypernormal stimuli. Not to mention, now every cartoon, every YouTube video, every game, every app, they’re all competing for this. We get dysregulated. Once our receptors get trained for that level of dopamine, we don’t do very well when we get normal levels of dopamine. So you find these kinds of horrific situations where, if you take the phone away, children will tantrum. Sometimes there are even much worse outcomes, because their brains have been rewired. I think that’s a big piece.
The other piece is this idea of giving people what they want. Companies will often say, “Hey, because people clicked on it, that’s what they want,” when actually it’s what they can’t help but do. They can’t help themselves, because if you live on a feed and you see a sensational image that piques your interest, you’re going to click on it. But that doesn’t mean that’s what you wanted to do. If a big flashing thing appears, your eye will go to it. It doesn’t mean you wanted to look at it. One of the analogies we make is: if you’re driving on the highway and you see a car crash, and an AI is observing you, it might say, “Oh, wow, look, humans seem to love car crashes. They always look at car crashes. Let’s make a world with car crashes everywhere.”
Obviously, that’s not what we want. We look at car crashes because we know they’re related to our survival, so we tend to be very much on alert there. We look, but it doesn’t mean that’s the world we want. The analogy is that our feeds are often full of metaphorical car crashes, and in the end that makes us all click on things that we don’t intend to, and it steals our attention and our time in ways that are not aligned with our original intention. The question is, what would a much wiser world look like? It would be a world where the tools that we have, the places where we spend time, ask us what our intention is instead of inferring it from what we click on.
Because those are almost always not aligned. The problem is, if they did that and they really tried to guide us toward what we intend, our time on site would be much shorter. That doesn’t monetize very well. That’s kind of the reality: no matter what people say, you have to look at the business model behind whatever product you’re using. The business model reveals the actual incentive structure. This is why social media companies just can’t be that helpful for you: they really need you to be on site, as opposed to, for example, letting you learn the thing you need and then go away and live your life somewhere else, off-screen.
There’s a conflict of interest there, and that’s why they have a really hard time actually being genuinely useful in any large, meaningful measure.
Thomas: You said something very important. You said that just because people click on something and it gets many clicks, that’s not necessarily what we want; it’s just what pulls our attention, maybe because much more fundamental, basic survival needs get triggered by sensational news feeds and sensational images. Last year, we had a track on trauma-informed journalism, and in a way we made the point that sensational news and social media feeds are detrimental to public health. There’s a lot of data we can collect around that. It’s actually not really healthy. It creates more polarization, as you said, more stress in society, and all kinds of things.
But there is this argument: there is an economic system behind all of that. My next question would be, how can we change from where we are? We cannot just have a good idea, because obviously we are where we are for a reason. How can we actually transform into a world that takes into account the economic model that drives a lot of the motivation for these businesses? And then there is this maybe more dysregulated part of me that can’t fully interact with the technology in a way that’s good for me. Maybe you can walk us a bit through what the ways are to come into a different world, one that doesn’t ban technology, where we say, “Oh, [inaudible].” How do we do that? How do we get to a more healthy version of-
Randima: This is obviously very, very difficult. There are some answers within the current system, and then there are questions about the current system itself: is it even workable? Is it something that’s going to keep functioning? Let’s start with the easy part, which is within the current system. Because the system centers on price, everything is built around supply and demand with prices in the middle; that’s kind of how the economy works. That’s the thing we track: the dollars, the revenue, in whatever currency.
And so interventions that affect price are the ones that are going to be successful. There are ways to do that. For example, when The Social Dilemma came out, the film generated a lot of pressure as people understood, hey, these products were kind of harmful. That creates some negative pressure against using these products, but still, that alone doesn’t affect the price that much, because not that many people leave platforms. What it does is create awareness so that legislators can create new laws, new penalties for doing the wrong thing.
If you harm children, there’s a penalty for that, and then there can be litigation and lawsuits. There, now you can see the connection to the dollar sign. Eventually, either some kind of government agency or litigation, like a law firm, will sue you, and then the costs of the harms that you’ve done end up back on your shoulders. One of the interesting things that happens with technology is that it advances much faster than our institutions can keep up. A lot of times, I mean all the time, the technology that comes out has effects that we don’t yet fully understand. This is happening with artificial intelligence also. We don’t fully understand exactly how it’s all going to play out, what the harms are, what the measurement of those harms is, and all that.
When we can measure, and we can put that back on the company’s balance sheet and make them pay for it, then it starts to affect price. Again, you’re coming back to the dollar sign. There are similar kinds of things you can do: you can have ways of inspiring people to build more humane technology. That’s one way of subsidizing the R&D cost of doing the right thing, so you can inspire people, teach people how to do the right thing. There are insiders who can generate pressure. When something really bad is happening and a whistleblower speaks out, that’s another way of bringing all those costs back onto the company, because documents are leaked that show the company knew there was harm that now needs to be accounted for.
These are all examples. Basically, there’s a set of moves you can make there, and that’s all within the current system. Changing systems is a very difficult thing. I think fundamentally, the idea that we can keep growing forever, that we can keep extracting and it’s all going to be okay, just isn’t true. You can look around and it just doesn’t make sense. Whatever you’re extracting from has to regenerate at a rate commensurate with the rate of extraction. If you’re extracting very, very quickly… For example, if you take oil from the earth that took millions of years to form and you expend all of it in the course of a few hundred years, or even less, that’s a big problem. That’s just not sustainable.
The same thing with… So there are physical resources, and then there’s also our mind. Look at our own minds: if we start to slice up our attention faster than it can regenerate, faster than we can heal, than we can sleep, faster than we can make sense of the world, we start to break all of these faculties, and they don’t regenerate quickly. Then there’s the conditioning of the mind. This is an interesting topic, related to mindfulness and self-improvement. I think we all know it takes a lot of effort and discipline to do the right thing: to do mental training, to do physical training, to do exercise, all of these things.
They’re not easy, whether it’s eating healthily or whatever it is that we think is important, but it’s very easy to unlearn those things and to do damage, especially now. Again, think about children and their growth, because they’re much more malleable. Once they’ve learned the wrong habits, teaching them the right ones is really hard. We may work very hard to develop deeper, longer, sustained attention, more mindfulness. But think about it: the phone is around us for eight hours a day or more, doing its training process, while maybe we sit and meditate for 15 minutes, or 30 minutes, or maybe 45 minutes. Deep meditators would do more in that direction, but even they would have a really hard time if they used the phone all the time.
That’s kind of undoing the training process. With children, that happens much faster. I think these are the kinds of things we have to be aware of as we think about, okay, how can we transition our economic systems to something healthier? And this is not even mentioning inequality. About the economic system we have right now, one way I like to say it is that capitalism is very sensitive to capital. It’s brought us a lot of benefits over the years, a lot of advances. This kind of competition and extraction has brought a lot of advances in medicine, food, clothing, shelter, all of these things. We have all these wonderful technologies to learn things and access information, and now artificial intelligence is an example.
But again, all of these come at a cost, and if we allow these things to run in unsustainable ways, I think that is a big problem. That’s where we are. If we think about how we want to change the system, I think the AI situation is maybe a good motivation for addressing some of that, because I don’t think it’s going to be sustainable as we automate millions and millions of jobs. Inequality is going to get even wider. Capitalism, as I was saying, tends to distribute capital unequally. It keeps doing that repeatedly, repeatedly, repeatedly.
There’s this concentration of wealth that happens and now we’re going to have an even more profound acceleration of that whole process. I don’t think that works out well for a large number of people. We have to pay a lot of attention to that. The best news I can say is that maybe this is an opportunity to make some changes.
Thomas: Now that you mentioned it already: there was The Social Dilemma, the film that addressed social media and the attention economy, and now we are already at a different stage. As you said, it develops very fast. Now there is AI, and I don’t know how many people know how it really works, how all the processes really work, and where it’s going to go. That’s why we are in a moment that is unpredictable. Maybe you can speak a little bit to this moment. What are the upsides, the downsides? What do we need to take care of?
Randima: Yes, sure. I think the best thing to hold in mind, for anybody listening, is that technology has always been a great accelerator. Technology is humanity’s greatest accelerator; you could make a good case for that. AI is sort of the pinnacle of acceleration. It’s accelerating things very quickly. This latest generation of artificial intelligence is extremely capable. Generative AI can produce text, images, audio, video, all at very high quality, and it’s improving very quickly, much faster than Moore’s law, faster than chip or semiconductor technology was improving, because it’s compounded. It’s got semiconductor technology driving it, but also algorithmic technology. The way we run things, the way we can collect data, the way we can connect computers together in clusters, all of that is powering it. It’s advancing extremely quickly.
And so AI ends up being the biggest accelerator and it’s going to accelerate everything that we have, the way we do things now. If you think about any company out there, they’ve thought really carefully, they kind of went back and said, “Okay, let’s do our strategy. Let’s look at our strengths and weaknesses,” and they’ve made their investments. They’ve made their factories, and their processes, and their people, and they’ve sort of figured out the way they do things. What everyone’s looking at now is, okay, the way we do things is good, but how do we make it better? How can AI accelerate that process? Everyone is asking this question.
If we already have a predominantly extractive system, we are putting that extraction into overdrive. It’s going to be amplified many times over, reducing friction everywhere we can. That’s one aspect. There’s another aspect, but I should say that the first one is the dominant aspect; I think that’s going to drive our future. Part of that is the process of automation. It’s kind of common sense. If you’re a business owner, you’re always looking for new technologies to make the business run better, do things faster, do things more efficiently, and so portions of jobs are going to start to be automated, especially the more cognitive jobs, bachelor’s- and master’s-degree jobs in particular. Those types of jobs, that type of thinking.
AI is pretty good at those things. Physical labor is harder; it’s going to come a little bit later, but the robots are being trained too. But first, it’s going to be these cognitive jobs. When a significant portion of a cognitive job has been automated and only a little bit is left for the human to do, business owners will start to say, “Hey, wait a minute. I don’t need four humans. I just need one human here,” because there’s always this extra cost of coordination, and humans, we’re messy. We get sick. We have priorities. We need vacations. We need healthcare. Machines don’t need any of those things.
And so you’re going to find this kind of… It’s just going to be good business, and people are going to make the business choices. This is the same reason trickle-down economics doesn’t work: that’s just not how human psychology works. Whenever there’s a chance to simplify, automate, and reduce cost, business owners are going to do that, because that’s what they’re rewarded for doing. This is the trend we can expect, the dominant trend. We should also be fair and say that, at the same time, there’s no question that artificial intelligence is highly likely to advance medical research and climate work; drawing down carbon is one of the things that might become possible.
New sources of energy, like battery technology or more sustainable, high-yield energy sources, and medicine, these are just great things that can happen. There’s no question about that, but we always emphasize that there’s an order to these things. You can’t get to all the good things if society breaks down first. It’s sort of like a doorway that you have to get through, and the doorway involves good sense-making, being able to make sense of the world; good choice-making, being able to make wise choices; and not losing ourselves in the process. There are all these questions about jobs, and people need to have some kind of security. We need to have a system that’s actually aligned in a way that doesn’t create even wilder inequality than we have now. Ideally, we should reduce it. And then we can get to that bright future.
These things have to be done in the right order. And instead, of course, just like the social media companies before them, every company now is caught in this race to figure out how to deploy artificial intelligence and maximize revenue as fast as possible using the latest round of technology, without stopping to assess the harms, mitigate them, or make the kinds of investments that would be needed to prevent unpredictable harms from happening. We like to say social media was our first contact with artificial intelligence, and this round of AI is the second contact. Just like the first time, we couldn’t see all of these downstream effects.
We saw this attention thing, but we didn’t see the downstream effects very clearly. It’s the same thing now, except this time I think we can actually see some of the ways things can go wrong when you have widely distributed, very powerful technology that’s no longer centralized in a way that can be regulated. Not that we were doing that well before, but it’s even harder when people can run these technologies on their phones, and anyone can make fake videos, fake audio, fake images, the text to match, write fake research papers. All this stuff is really easy to do, and there will be consequences.
Thomas: Yeah, absolutely. Thank you for this. This was a great overview. It’s also a reminder for everybody that we are contributing to this and we have an effect in the world. I think it’s important to highlight everybody’s impact. If billions of people notice their impact, then there is also a possibility to make a constructive impact.
Randima: I would just add, along those lines, a few of the most important things if you are listening right now and thinking, “Oh wow, this is a lot to take in.” One is to always analyze when you see a situation: think about the system behind it and the incentives. The incentives reveal much more about behavior than words or advertising campaigns do. If you really want to know what someone is doing and where their heart is really aligned, look at their incentive structures. I think that’s really important.
The second thing is, at a personal level, being very clear about what wise action is for you. What is thriving? What does happiness really mean for you in your life? Talk about it with people you care about, with your friends, with your loved ones. Talk about it with teachers you respect. Figure that out, and then operate from that North Star instead of letting the menus of anything, whether it’s Netflix or social media or images, drive your life. Don’t let other people’s menus drive your life. I think that’s kind of a big lesson. In order to do that, you have to be clear about what you want for your life so that you can be discerning. I think that’s a really worthwhile use of time and energy.
Thomas: I love that. I love that. Coming back to you in Sri Lanka, being inspired by Buddhism through your mom: when we look at how mindfulness is important in your life, how does it help you, and how do you think mindfulness practices, presence, all the benefits of those practices, can actually help us navigate this time? What’s the benefit for us in relation to the conversation we’re in right now?
Randima: I think in order to see clearly, you have to have some sense, just from a theoretical standpoint, of what is important. What is happiness? Some kind of framework is helpful, but then to actually implement that framework, you need to have awareness. You need to be aware of your mind and how your mind works and the feelings you’re having. This is why I think mindfulness is a very helpful training: to be aware of what’s happening in your body, what’s happening with your thoughts, what’s happening with your emotions, and this sense of what’s pleasant and unpleasant in your life. Just being able to track all of that.
And then from there, you’re able to create some moments of space where you can say, “Is this the thing I want to do? Is this helpful? Is this helping me right now? Is this skillful?” Because you’ve thought ahead of time about what a skillful life might look like, and you have the tool of mindfulness to help you create the pauses, if you put those two things together, I think you have a pretty good chance of making better choices.
That said, it is extremely difficult to defeat the algorithms, the supercomputer pointed at the brain. Given the amount of analysis and testing that companies have done on all of us, we’re all very vulnerable. We have a certain human nature. We have cognitive biases. Those are really easy to hijack. I think we have to be very aware that even advanced mindfulness training is really not going to do that much for you if you are putting yourself in the wrong battle all the time.
One of the things that’s important is a kind of renunciation, a kind of separation from things that you’ve decided are just not skillful for you. Don’t try to win that battle; win it by not being in the fight. I think that is the best way to navigate, and then use the mindfulness piece in a simpler situation, where you can make wiser choices.
Thomas: No, that’s beautiful. So mindfulness actually acts as an amplifier, helping us be aware of what’s good for us and what’s not, and it strengthens our choices. But then we also need to stick to the choices that we’ve made about our life and follow them through; otherwise, we get sucked back into the same game. Yeah, that’s amazing. Thank you so much. If there’s anything that you think we didn’t talk about, please let us know, or give us maybe a nugget to take away from this conversation. Otherwise, I think it was a great arc, and there was a lot in there for us to digest.
Randima: Yeah, this was great. I think I tried to summarize things earlier, so I don’t have much more to add. There is an infinite amount of content here, but I think we touched on some topics that I hope will be helpful to everyone who’s listening.
Thomas: I think so too. It definitely was inspiring for me and I’m sure for everybody who’s listening right now. Thank you very much for this time.
Randima: Thank you, Thomas.