Thomas Hubl: Hello and welcome to everybody. This is Point of Relation. My name is Thomas Hubl, this is my podcast, and I’m sitting here, again, with Angel Acosta. Angel, welcome, and welcome, everyone. We had an amazing conversation last time, so many of you might already know the first part of this exchange. This is the second part of our AI exploration. And I loved how we were riffing off each other in that conversation; it was a really generative space and I felt very inspired afterwards, so thank you for that. And let’s see, I think we best start with what’s most fresh. So what’s freshest for you right now in the conversation around AI, AI development, and how we as humans experience the collective vibe around AI? So, what’s new for you?
Dr. Angel Acosta: Yeah, yeah. No, thank you so much, brother, and always a pleasure to be in dialogue and to give myself an opportunity to deepen the relationship with you. I’ll take a moment just to share some of the questions and thoughts that came up during our last conversation, like how addressing the biases inherent in AI and its technologies can create a leap for consciousness. If we do a really good job at training these models to limit and reduce bias, is there a possibility that AI can help us with expanding our consciousness?
Also, the fear that occurs in our bodies when new inventions and innovations arise, and how that fear is sometimes connected with our own trauma and how we deal with uncertainty. So, how does healing trauma support us in adapting to this evolutionary moment with artificial intelligence? And so yeah, I think we tapped into a lot around this exponential force.
And so for me, since the last time we talked, I’ve done a lot of experimentation, so I’ve played a lot with AI and I’ve built some AI. I want to tell you about that. I’ve built some AI within the AI platform, so it’s not like I built AI from scratch, but rather I leveraged the AI infrastructure of, let’s say, ChatGPT to build custom GPTs, which we can talk about.
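(Custom GPTs are configured inside ChatGPT’s own interface rather than written in code, but for readers curious what building on top of an existing model can look like, a rough programmatic analogue is wrapping a general model in a fixed system prompt. A minimal sketch using the OpenAI Python SDK, where the model name, persona, and prompt are purely illustrative, not the actual GPTs mentioned here:)

```python
# A minimal sketch of the "custom GPT" idea: wrap a general chat model in a
# fixed system prompt so it behaves like a purpose-built assistant.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative persona only; any instructions could go here.
SYSTEM_PROMPT = (
    "You are a contemplative-education assistant. Answer briefly and "
    "invite the user to pause and reflect before acting."
)

def ask(question: str) -> str:
    """Send one user question through the fixed persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How might I bring more presence into my mornings?"))
```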
So I’ve been experimenting a lot with AI, I’ve been holding space in community to process the fear around AI, and also holding space in community to play with AI together. So I’ve been having some really hands-on moments. I’ve been in some conversations thinking about contemplation, healing, and artificial intelligence.
I just got back from Japan. I was in Tokyo and Kyoto speaking at a conference on contemplative education in the age of AI, which was really powerful. I got a chance to give the closing remarks at that event. And it was just really powerful to think about AI from the perspective of Japanese society, a society that has done a really good job at integrating technology, yet at the same time has some serious issues when it comes to isolation, loneliness, suicide, and declining fertility rates.
So, it’s a very rich conversation around the limits and the potential of not just AI, but also robotics. And to the question you first posed about what I’ve been up to: I’ve just been observing the field and its exponential growth. If you look closely, the field of robotics is getting very sophisticated, to the point where you can have a robot, a humanoid, integrated with ChatGPT. So then you can have a full-blown dialogue not just with ChatGPT and its text-based interface, but with an actual physical being that can move and respond to you, which is different. This is a different thing. And as for the use cases, in Japan they’re using robots, what they call humanoids, that can talk and respond to you, to care for and support the elderly.
So, I’ve been up to, as the expression goes, I’ve been up to no good. I’ve been up to good and no good in terms of learning about the limits of some of this stuff. And the last thing I’ll say, that I really want to bring to you, brother: the CEO of Nvidia, Jensen Huang, gave his yearly keynote where he laid out the updates around Nvidia’s digital and technological infrastructure to support the field. And Nvidia has played a major role in the growth of artificial intelligence.
And he was just giving an example, and I want you to sit with this. So for example, ChatGPT: to build and train ChatGPT, the large language model, required something staggering. And I’m new to this, so even though I’m going to say this, please don’t take it as if I’m some kind of expert, I’m literally just repeating what I heard.
But to train and to build ChatGPT, OpenAI’s ChatGPT, you need 1.8 trillion parameters. And parameters, as I understand it, are the model’s internal settings, the dials that get tuned as it learns from the data sets, right? Let’s say the German language or the French language, or the entire data on the internet. Just imagine feeding ChatGPT, the large language model, all of that data.
So there are the parameters, and then there’s the other piece, which I’m still understanding, called tokens, the little chunks of text the model actually reads during training. So, you need 1.8 trillion parameters and several trillion tokens. And the relationship between parameters and tokens determines, at the end, the full scale of the training that GPT goes through so it can respond to us.
So, that expansive kind of numerical outline, what that means is, when you’re working with an artificial intelligence and you say to ChatGPT, “Could you please help me write an email?” or, “Could you design X picture?” the machine takes your input and runs it through that trained model to spit out the response that’s most statistically aligned with what you asked. And the training itself, the 1.8 trillion parameters times the several trillion tokens, comes out to the equivalent of 30 to 50 billion quadrillion floating-point operations. Billion quadrillion.
I only want to say that, one, to admit I don’t know what I just said, and two, to show you the magnitude of computing power. In the last 10 to 15 years, we’ve seen computing power grow a thousandfold. It used to be, in the late 20th century under Moore’s Law, that computing power would double every couple of years, maybe a hundredfold in a decade. Now, the way that artificial intelligence and computational power is growing is unprecedented, brother, unprecedented.
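(Those figures can be sanity-checked with a common rule of thumb from the scaling-law literature: total training compute is roughly six floating-point operations per parameter per token. A back-of-the-envelope sketch, where the 6x constant and the exact token count are assumptions rather than numbers from the keynote:)

```python
# Rough sanity check of the training-compute figure discussed above,
# using the common approximation: total FLOPs ~ 6 * parameters * tokens.
parameters = 1.8e12          # 1.8 trillion parameters
tokens = 3e12                # "several trillion" tokens; 3 trillion assumed

flops = 6 * parameters * tokens      # ~3.2e25 floating-point operations
billion_quadrillion = 1e9 * 1e15     # one billion quadrillion = 1e24

print(f"total training compute: {flops:.2e} FLOPs")
print(f"~{flops / billion_quadrillion:.0f} billion quadrillion FLOPs")
# Prints ~32, which lands inside the 30-to-50 range quoted above.
```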
So the question I’m sitting with here today is: if we’re building technology that can engage in that kind of computational power, that can create an exponential leap in processing speed, then what can we do? What are we doing? Is it possible to create processes that allow our physical, intellectual, and spiritual infrastructure as humans to hold more, to heal more, to respond more? ’Cause I see a gross imbalance there, and I think your work could be really insightful and powerful here: how do we adapt to this evolutionary leap, not only in terms of technological computing power, but also in terms of soft skills, spiritual awareness and awakeness? Does our compassion grow exponentially, too? So, I leave you with that, with what I’ve been thinking about lately, to initiate this dialogue.
Thomas: First of all, I want to say that I’m so happy to sit here with you, I love you, and it’s great to dive back into this conversation. I feel a strong resonance with your excitement about what you just explored and how it sparks you and your creativity. That’s amazing. And it’s very interesting, because as you were talking about all these numbers, I just felt how amazing it is that we can walk as an animation of the biosphere and combine millions and millions, or billions, of years of evolution as this conversation.
So the data that is having this conversation is amazingly rich. When you imagine all the lives that led up to your life and all the lives that led up to my life, all the life, I mean, from the first cell up to us sitting here, a living animated planet speaking to a living animated planet, inspired by a tremendous amount of spirit. Having these conversations, all the complexity of our nervous system being able to encode for all these different layers, the physical evolution, our emotional, our social, our intellectual, our spiritual evolution, that is what is sitting here having this conversation.
And so I think it’s important to say that, because otherwise we compare. And that’s why I love what you said, “How can we keep up with it?” The answer is that we realize we are one supercomputer, not separate laptops. Because the separate laptop is incredibly slow, but the supercomputer that animates the separate laptops is incredibly fast. And that has nothing to do with any opposition to the thinking of AI; it has to do with what is processing AI.
And so our processor is slowed down by separation, and most of it we created ourselves. It’s what we experienced through trauma, or inflicted through trauma on ourselves as humans, as humanity. So all the ethical transgressions that we are also partly a result of are slowing down our processing capacity.
So there is something in the disorganization of the data flow in the supercomputer. And maybe it’s that very separation. When you listen to the collective dialogue, let’s say you’re a therapist and humanity is your client, you just lean back and say, “Okay, let’s see what comes up in your experience.” And when you listen to us projecting all our stuff onto AI, I think it’s pretty interesting what we are coming up with.
So if you turn the mirror, like a good therapist would do, and say, “Instead of blaming your wife, your husband, your coworker, whatever, let’s look at what’s happening inside, what’s really going on for you when you say all this.” Which doesn’t mean that what we say is not true, or might not be true too, but there is something in us. And so, first of all, as we said in our last conversation, what a tremendous possibility for self-reflection that is, humanity’s self-reflection, because suddenly there is, just a little bit, the idea of an other. It’s not an extraterrestrial, it’s AI. And that is a great reflection surface, I think.
And then the next level of separation is: where is actually the separating line? So, what are we? Because we are obviously downloading something. There are so many people working in different startups around the world that all contribute to ChatGPT in the long run, because somebody invented the internet, somebody invented all of it. Okay, so now it’s AI, but what are we all downloading? Where does that come from?
And I think that’s amazing, just us having this dialogue with everybody who’s listening to it. There is so much data that we often forget when we see ourselves as a separate small body and human, meeting other humans as separate bodies and humans, versus, as you mentioned, the vast field of interconnectedness, interbeing, interdependence that the mystical traditions speak about. So what happened to our data flow? And “our” is not us versus AI; what happened to the data flow?
And I think another conversation to be had, and we started it last time a bit, is: what is AI actually learning from? It’s great that all the information, I mean, I don’t know if it’s great, because there are some serious issues about the data that has been harvested. So that’s a separate point. But even if you say, “Okay, the data that AI learned from, what’s the makeup? What’s the architecture of that data? And what is the data that’s not there?”
Just last time we talked about the bias, the racial bias of AI. That’s a serious issue, and it’s serious proof of the unrecognized trauma that is part of the data AI is using. And that’s not the only trauma that we are unaware of passing on. What happens intergenerationally obviously seems to happen with technology, too. So the intergenerational trauma transmission happens there. That’s just one very important example.
And so there are so many questions coming up, just from your opening remark, that I think are remarkable to move with, without trying to say, “Okay, we know how it is.” It’s these questions I find very interesting. And I think what you said is true: melting the disorganization that creates a lot of separation in humanity, and the serious difficulty of collaborating on the major questions that we have globally. Seeing what happens if we upgrade collective coherence, if we defragment the supercomputer, then I think it’s interesting how we look at all these questions and the different levels of who we can become. So yeah, this is my first response to-
Angel: Yeah, yeah, it’s a juicy response, brother. There’s so much here. And I just want to take a moment to slow down because for folks listening, especially folks who are listening who are in the AI, artificial intelligence industry or space, this is a whole different way to talk about the work. So often, the very language that’s used to talk about AI is very transactional, it’s very cognitive, it’s very focused on the material aspects of inputs, outputs, the logistical processes that are required to train these models. And we’re having a very different conversation here, which is a little bit more expansive, so we’re not tethered to the paradigm of talking about the technology within the lexicon of the technology. I just wanted to be explicit there.
And lately, in my role at the Garrison Institute as the director of the fellowship, I’ve been doing a lot of thinking there. The institute thinks a lot about ancient wisdom traditions and scientific inquiry. And I’ve been talking to the leadership there about, “What’s the institute’s stance on artificial intelligence? What’s the stance?” And having some conversations around that.
So I’m in the process of working on a series of conversations, which I’d love to invite you to. It’s relevant to our conversation here. You’re going to love this. The title came to me: Artificial Intelligence and Our Spiritual Imagination, AI and our Spiritual Imagination. Just think about having a series of conversations, talking specifically about AI, but with people like you, people like Bayo, and just seeing what comes up.
So your response reminds me of that. What does it mean to think about exponential technologies in relation to our spiritual imaginations? How are we growing into this moment in our species, both building the hardware and software that is growing very fast, but also building the other aspects of who and what we are as a species, holding so much and knowing so much?
Yeah, brother. So I’m just excited about thinking about it from this perspective, and also knowing that times are hard. There are a lot of conflicts, there’s a lot going on that makes it really hard to have unity, to have coherence, as you call it. So I want to be expansive and also be grounded in the realities and the limitations of some of this work and this conversation.
The last thing I’ll tell you, that really struck me when you said it, is the data that these models are trained on. And in a way I might ask, what’s the data that I’m trained on? And by that I mean not just the education that I’ve gone through, the values that I’ve learned in my home, my community, my country, but the subconscious, and, as you sometimes call it, the frozen material, the frozen traumatic energy that is part of the data set that informs how I see the world and my bias.
So this also might be an opportunity: as we’re thinking about training AI models, what analogies, what comparisons can we make between how we train AI models to be less biased, more inclusive, more whatever word you want to use, and how we think about the way we look at ourselves and our own “training,” quote, unquote, our development? So there’s some interesting stuff there.
Thomas: Absolutely. I think that’s the great part. It’s a mirror, and it will mirror back to us all the incongruences that we carry inside. So I think it’s a great possibility for some kind of consciousness jump, an accelerated learning for us, because it’ll mirror back some aspects that we need to change, and that changes our consciousness. The only catch is that the truly unconscious biases are the ones we don’t even ask about.
So we ask about the ones that are already close to the surface, where you already see the symptoms and you ask yourself, “What’s under the surface? There is something.” Because if you say there is something, you need to see it already. You don’t see the stuff that is deep down in the ocean. It’s dark, you can’t see, there is no visibility. But when you come closer, the visibility grows and you see stuff.
So we need to always keep in mind, and be humble enough to admit, that some of the unconscious stuff we don’t even ask about because we don’t have a question. And so that’s interesting, and why humility, I think, is so important: that I don’t know what I don’t see about you. I know maybe the symptoms that arise between us and I know what I see, but I don’t know what I don’t see.
And so what’s inverse in our perception doesn’t show up anywhere. The subject is not aware of the inverse information, it’s unconscious, and that makes it really interesting. So what are the methodologies to surface those, if at all? What would that be? And I think that’s an interesting question.
And then, as you said, I think there is a different form of intergenerational trauma transmission. If we train large language models, we train them through certain distortions, but we might think that’s normal information. So that’s the normalized collective trauma field that also creates a bit of a distorted version of reality.
And not only that. As you see in some international, I don’t know, warfare, data warfare, how does data warfare work? It increases separation. The invisible warfare that’s going on on the planet is working precisely to increase separation and conflict in countries, because the more conflict there is, the more of a deficit it is for that country. Because coherence is actually success. Every coherence that really works means more freedom, more openness, more inspiration, faster progress.
And so we are living in a time, I think, where we need to be aware of both: that there are unconscious, non-beneficial intentions that are increasing separation, and that there is an increase of data flow through technology. As you said, 1,000 times faster means our nervous system, without a genetic update, needs 1,000 times faster processing capacity to live in a world that is 1,000 times faster.
And so we are actually going, within our lifetime, through an amazing update while we are not having a genetic update, like a conception in between. And so our nervous system is actually doing an amazing job. To be part of that fast-paced world, a 1,000x world, we need to constantly deal with what that means, that we need to channel that information. And I think often, when that hits trauma, it massively increases the polarization in our world, because we can’t process it.
And so, at the moment, we inevitably increase the polarization; that is, when the fast data hits the unconscious frozen areas in us and in our cultural bodies. And so we urgently need a way to deal with this. This is not just, “Oh, there’s more polarization.” As you said, 10,000x will increase the fragmentation even more. And we’re feeling it now already. And then you have climate change and then you have… So there are many stress factors that are…
And that’s why I think without a collective architecture to induce collective healing, it’s going to be very hard to keep up to speed. I think we need somehow a stronger process to collectively heal because that liberates data flow, that unifies, that allows us to really ground AI through our nervous system as the next initiation.
And I think that will connect ethics and AI so there’s not a gap. Now there’s a gap, we see a gap, and we need to close that gap, or that would be very beneficial. And I think that liberating the separation and the data distortion and the collective trauma field, however we want to call it, liberating that energy, will increase our capacity to channel data flow, which means we are literally grounding technology through our body in the planet, or as the planet. And I think that will take care of many, many issues.
Angel: Yeah, brother. Yeah, I got to kind of slow down myself and also to allow listeners just to sit with the magnitude of what you just said, and I really appreciate what we’re doing here. One, as I mentioned before, having a different conversation around AI that accounts for not just the hardware and the software and the commercial, but also the personal, the spiritual, emotional, the transpersonal.
But I want to stay with this image of what you just mentioned, our nervous system, both my individual nervous system and our collective nervous system, in terms of how it’s dealing with and adapting to this incredible shift and change in the environment by way of AI and other emerging technologies. I just want to slow down a little to feel into that and to be compassionate towards that.
And yeah, part of having so much data is that you get overwhelmed. So I’m overwhelmed right now. My system is like, “Ah.” I’m feeling it right now, and it’s kind of like, “Whoa, whoa.” So I’m really trying to ease back in. I know sometimes you play with the word responsibility: responsibility is also cultivating your ability to respond.
So, when you share something so expansive, it’s almost like you laid out a big data set, and my system needs a little bit of a moment to see how that makes sense for me, especially in a grounded way, and how it can be applied in my life and in my community. So I would say that, yeah, man, we urgently need processes that allow us to update and upgrade our emotional and spiritual capacities to meet the technological flow.
And at the same time, there may already be ancient traditions and practices that allow us to be spacious, that allow us to cultivate a really powerful presence to meet the times. So, maybe there’s not something new that’s required, but a remembering, a reconnecting, a reestablishing of a relationship to land, a reestablishing of a relationship to ritual, a reestablishing of a relationship with divining tools. Maybe we need new divining tools. How do we see the laptop as a divining tool, as a tool that allows me to connect to you and have this precious conversation?
And the last thing I’ll say, you kind of reminded me of how precious this conversation, this moment, is. Sure, think about human ingenuity in creating the laptop or creating computers or creating Zoom, incredible, but also think about this capacity to bring all these different elements together, from the minerals that create the microchips, to the glass that makes the screen, to the fiber optic cables that build the networks that connect the internet. And the bundling of that network and technological interface to facilitate not just this conversation, but to create a whole society that is interconnected. That’s just mind-blowing, man. That’s just like, “What are we doing?” In a good way. In a good way.
So in addition to the challenges and the conflicts, as much as I’m disappointed in the ways that all kinds of conflicts and traumatic events are happening around the world, exacerbated by greed, by inequality, by hatred, there’s a paradox here, I think.
So, in addition to what you said around upgrading our capacity to hold more and process more, there’s also a skill, I think, of holding paradox that is going to be really important, really being able to hold paradox: that sometimes there’s polarity, and two things can be happening at the same time and be truthful at the same time, in different degrees. How is it that we can develop such a sophisticated technological society, and at the same time, such a barbaric capacity to continue to cause harm?
So there’s something to me around holding paradox, increasing our capacities to hold more. And the word that came up for me a lot as you were talking, brother, was harness, harnessing. We’ve harnessed all this technological power, we’ve harnessed it to create Zoom, to create ChatGPT. So the question then is how do we also harness our intelligence to create that next evolutionary leap spiritually, emotionally, and psychosocially?
Thomas: Yeah, I’m very much in this, when you say silicon becomes silicon chips, that silicon is building silicon chips. So we are this amazing assemblage of life, organizing these different layers of reality, an intelligence that can organize different levels of reality in new ways.
And in a way, we are also seeing what I call, or not I, what we call horizontal complexification. AI has a tremendous ability for horizontal complexification. And this looks at first like magic, because so much more recombination can happen than we see at the moment expressed in the individual and collective coherence that we as humanity are able to perform, or to be.
So, there are data gaps and there’s a lot of friction in the computer. And then we say, “Why is the computer so slow?” Relatively speaking, the computer is, I think, very fast, but in the face of AI we are seeing what our consciousness, at the moment, is not able to do. Because it seems like that information is somewhere not here, and we don’t have it, and we are dealing with all kinds of questions. But the mystical traditions say that information is non-local or omnipresent, which means every inch of the universe contains the entire information of the universe.
But in my separate sense of self, I feel separate from that. So there is a degree of separation. And so I think the question you ask is amazing: what’s supportive to generate this leap? And one aspect of the question, I think, is what’s inhibiting it, why are we walking with that question?
And I think about the inhibition when something doesn’t work. That’s why a prayer I often say is, “I’m grateful for what I see and I’m grateful for what stays hidden, because this is thy will and that’s why I’m here.” So, I’m grateful for the reality that I’m aware of, and I’m grateful for the reality that I’m not aware of.
And at first we could say, “Oh, why am I grateful for what I’m not aware of?” But the inhibition is the intelligence that saved us in the past, and now it seems dysfunctional, but actually it has a function. And raising our capacity to make that function my friend instead of my enemy, I think that’s an amazing capacity of wisdom.
So how can I relate to the invisible, not as the thing that I don’t see but should be seeing, but instead become a friend of making invisible, hiding, disappearing, absencing? So that the power of absencing becomes my friend, and then I’m much more able to release the inhibition that we sometimes see as externalized issues.
And I think some of them have severe side effects. It’s not that it’s romantic. Some of them lead to worse, as we see right now, and some of them lead to enormous suffering or to other traumas that are happening right now. So it’s a serious issue. And still, the process is the same. And that’s why I’ve recently been talking a lot about: what’s the grammar of the language of separation, and what’s the grammar of a language of unification or transformation? They have a different encoding, and that encoding will determine which world I’m living in. So yeah, I think this is just an initial response to your question about how we generate that leap, or the jump.
Angel: Yeah, brother, I think it might be setting us up for a part three.
Thomas: I will be happy to have a part three.
Angel: This might be setting us up for a part three of this dialogue. So, here are some reflections and some recommendations as we close out. I’m really struck by what you just said, and this is kind of a given, but the difference between data and wisdom. AI can take a certain set of data, trillions of parameters, trillions of tokens, as I mentioned at the start of this conversation, and provide responses to our businesses, to our communities, to our own personal needs. But there’s something to wisdom that is a practice, and maybe part of what’s required right now is a harnessing of all the practices and rituals that we know as a human community to reactivate our capacity to engage in wise decision-making in relation to AI.
And then another thing that I think would be really cool at some point for us to do, we could do this virtually and at some point maybe in person, is to take this conversation into community and facilitate some kind of activating process, in the form of maybe a contemplative practice, a meditation, or a series of deepening practices that allow each participant to settle in their bodies and then interact with AI together in the same space, to see: how does the readying of our minds, the attuning of our spirits, impact our interaction with AI, and how does it impact the output? And then how does that process in its entirety give us some insight into the kind of work that we need to do to adapt to this new world that’s already here?
Thomas: That’s beautiful. So let’s do that. That sounds like a fun-
Angel: [inaudible 00:51:00].
Thomas: That sounds like a fun thing to do. Yes. That’s interesting. That’s an interesting experiment. And then we see what happens. That’s great. I love that. And I also love your framing: there’s so much wisdom in all kinds of wisdom traditions around the world, and as you said, we are not inventing something new, we are literally remembering very precious wisdom and rituals and healing methodologies that have been here already for thousands of years.
And of course, they’re always being updated. They’re not getting old. But I think what you said is a deep call to remember and recognize the value that lives in between us or lives amongst us already, and how to reactivate that and make that wisdom part of the resource in our conversation. I think that’s really very, very important.
And the one thing that we didn’t speak about last time, maybe because we need the cliffhanger to stay a cliffhanger for our next part, it’s just finding excuses to come back together, which is great, and I’m enjoying this deeply, is: what’s the difference between horizontal complexification and vertical alignment, and what’s actually the future that AI has?
And I think that’s an interesting question, because we assume that the increase of data speed and making AI better and better is its future. And I think we should have, maybe in our third conversation, a deeper exploration of what the higher consciousness future actually is. What’s actually pulling us into vertical innovation? Where, as Otto Scharmer, our mutual friend, says, “At the bottom of the U, the future emerges in presence and not tomorrow.”
So what’s actually that pull, that inspiration of the great geniuses, and maybe also the collective genius, of downloading future information, subtle information, and making it manifest? So, participating in the creative process of the universe, and what’s the role of AI there? What’s the limitation? What are the updates? I think that’s a juicy question to work through.
Angel: Yeah, let’s do it. Let’s do it. I’ll leave with that juicy question, and that juicy question allows me to ask: is AI an opportunity for democratizing the capacity to play with the creative force of the universe? I’m going to just leave with that. Is this also an opportunity to play with planetary, universal intelligence via these technological interfaces? So, always a pleasure, brother.
Thomas: Yeah, always a pleasure, Angel. It’s so great to be here with you and I walk away inspired and creative and with a warm heart, so thank you very much. That was amazing.