EPISODE 36

October 10, 2023

Dr. Angel Acosta – AI: The Meeting of Consciousness and Technology

Thomas is joined by contemplative social scientist, consultant, and educator in leadership, social justice, and mindfulness, Dr. Angel Acosta. They discuss Artificial Intelligence, or AI, and the radical shifts it is generating in our lives, our work, and our collective consciousness. While AI holds incredible potential for the evolution of humanity, its creation and implementation also raise a number of serious ethical concerns.

Thomas and Dr. Acosta explore how this cutting-edge technology mirrors both the best and worst aspects of human society and how our approach to it can either create further discrimination and trauma or help us to better understand ourselves and collectively heal.


“Maybe, just maybe, we can come closer to each other, understand each other better, and create the conditions to experience healing as an indirect consequence in the face of a force like AI.”

- Dr. Angel Acosta

Guest Information

Dr. Angel Acosta

Dr. Angel Acosta is a contemplative social scientist, consultant and educator in leadership, social justice, and mindfulness who earned his Ed.D. in Curriculum and Teaching from Teachers College at Columbia University. He is the convener of the Healing Centered Education Summit, Chair of the Acosta Institute, and Director of the Garrison Institute's Fellowship Program.

As a member of the 400 Years of Inequality Project, he designed the Contemplating 400 Years of Inequality Experience to support communities with understanding structural inequality through a mindfulness-based and contemplative approach. With an interest in better understanding collective trauma, he is currently collaborating with other scholars to develop group processes for collective healing.

For more information, visit drangelacosta.com

Notes & Resources

Key points discussed in this episode include:

  • How addressing the biases inherent in AI technology can create a leap forward in collective consciousness
  • The fear that occurs when new inventions arise, and how integrating trauma enables us to assess the situation with more clarity
  • The need to heal deep wounds in our collective unconscious so that they are not propagated by AI
  • How the widespread use of AI represents a major evolutionary moment for our species
  • Exploring what it would look like to develop AI for the specific purpose of healing trauma

Episode Transcript

Thomas Hübl: Hello and welcome to the Point of Relation. My name is Thomas Hübl and I’m sitting here with my good friend and I’m looking forward to a wonderful conversation with Dr. Angel Acosta. So, Angel, welcome to the Point of Relation. I’m happy you’re here with us and I’m looking forward to our conversation about AI and much more. So, warm welcome!

Dr. Angel Acosta: Thank you so much, brother. Congratulations on the Collective Trauma Online Summit. Another wonderful and powerful curation of not just your lineage, your wisdom, insight, and intelligence, but that of others. So thank you for that. And then congratulations on this baby – I haven’t written a book myself beyond my 238-page doctoral dissertation, but I know how much labor it takes to produce one, and I’m just really excited to see how “Attuned” relates to “Healing Collective Trauma,” your last text, and kind of celebrates all the contributions. I’m a big fan, and I’m always trying to be in dialogue with you, even from a distance, reading your work and following your work, but also when we have moments like this where we can talk.

Thomas Hübl: First of all, likewise. I enjoyed all our conversations, and I am always very happy when we meet. I feel a lot of resonance with you, with your work, with what you stand for, and with what your spirit brings into the world. From every conversation I walk away enriched, and I feel a deep connection. So I’m happy for our friendship and for everything we give birth to when we speak. And I’m looking forward to today.

And today we’re exploring an impulse that actually came from you: the rise of AI and how it might be connected to attunement, to collective trauma, and to many other topics. Maybe you want to first share a little bit about your work for our listeners, and then we can dive deep into AI.

Dr. Angel Acosta: Yeah. If it’s okay, allow me to briefly touch upon our relationship, partly its history, but also its texture. As you were talking, I was kind of sensing into the maturing that has happened. Every time we talk, every time we have an opportunity to do a one-on-one conversation like this, I get a positively overwhelming feeling of how much we both have grown.

I just wanted to acknowledge that, if you recall, we started to talk more intensely during the pandemic, after George Floyd, around a lot of the racial justice and racial equity issues that were emerging outside in the collective, but also inside in both of our communities. And I just want to say that I see how much you’ve grown, and I want to acknowledge that. 

As far as my work, I consider myself a healing-centered educator and a contemplative social scientist. I have spent about a decade or more exploring the relationship between mindfulness, social justice, and leadership. My dissertation explored healing-centered pedagogy in the context of education, and I had a lot of fun asking the question: what are the conditions we need to put in place in classrooms and organizations for people to thrive, but also to heal and restore relations? It’s a really exciting and stimulating time trying to figure out an answer to that question. And one of the things that exploration has done is bring me to people like you.

Formally, I am the chair of the Acosta Institute, which is the organization that I’m slowly building. There we engage in healing-centered education online, contemplative social research, and what we like to call slow work. And then I’m also the director of the fellowship program at the Garrison Institute in the Hudson Valley of New York. There we support the next generation of contemplative leaders who think about their work and how it relates to social change, collective healing, and the curation of contemplative wisdom in relation to scientific inquiry. So that’s kind of my elevator pitch, Thomas. It all situates me as someone who cares about what’s happening in society, in particular at the intersection of social justice, mindfulness, and technology.

Thomas Hübl: That’s beautiful. I hope and bless your work to grow, to really be an expression of your experience, and to find more and more ways into the world. I think you’re doing great work. And then I want to thank Otto Scharmer, who came to me and said, Thomas, you need to meet Angel!

And I said, okay, if you say so, I will meet Angel. And it was fantastic; it had an amazing feeling. So thank you, Otto, for bringing us together and initiating our friendship.

Today we’ve come together, playfully in a way, to explore AI, something you came up with during one of our other conversations: let’s talk about AI in a more concentrated way. So what’s in there for you? What’s exciting? And then let’s see how we talk about AI and all the things that are going on around it.

Dr. Angel Acosta: Yeah, yeah, yeah, for sure. As we know, artificial intelligence has been very present in the news cycle recently, but also in the development of our society over the last 50 years or more. It’s just becoming more intense now as these large language models and forms of AI like generative AI are being deployed to the masses. From my understanding, this is a major evolutionary moment for the species. From what I’ve read and what I understand from leading scholars, this moment is as significant, if not more so, than the Industrial Revolution in terms of radically shifting how we relate to work, but also how we relate to ourselves and how we relate to information.

In light of that evolutionary potential, I think it’s going to impact how we talk about the work that you and I are so interested in: collective healing, collective trauma, conflict resolution, finding resonance, and a sense of adaptability on a planet with growing challenges. In particular, because of how fast this thing is unfolding, how much information there is around it, and its potential to do harm as well as good, there’s so much fear around it. Some fundamental expressions of that fear are seen in how people say that artificial intelligence will replace jobs. A real consequence of different forms of AI is that they will reinforce inequality. They’ll reinforce the wealth gap. They’ll reinforce racial discrimination through the way AI is deployed in policing to identify people using facial recognition, and sometimes, faultily, to grant or deny loans, or to make medical diagnoses, misdiagnosing those who aren’t legible to these preprogrammed, pre-trained models.

At the core, you and many others have spoken about an evolutionary leap. And I think you’ve been trying to walk beside us to get to an evolutionary leap in terms of collective consciousness; that’s really what I think you’re trying to do. So thank you for holding our hands. I find it an incredible opportunity to talk to you about artificial intelligence, the fear around it, the potential around it, the enormity of this historical moment, and just your ability to speak from a collective place, one that is historical in nature.

Thomas Hübl: Yeah, that’s beautiful. First of all, thank you for initiating this and for opening this space. I would love to first bring a few components into the space that I think are all important when we talk about collective evolution, because for sure I also see technology and the current technological development, including AI, as part of the acceleration of consciousness. That means the acceleration of data and the acceleration of process, and also that every time we make a leap or go to a new level in our consciousness development, there is an acceleration, which is not the same as being fast because of stress. I think that’s very important, because many people, when they hear acceleration, think, oh, it’s getting even faster. But acceleration is not a contradiction to slowing down; you talked about “slow work” before. When we look at trauma and collective trauma, the non-sustainable way of living is that a lot of the processes we create together, in our bodies, in our psyches, and in our societies, are based on trauma stress, which means the whole system is too fast, not because of the information or data flow, but because of too much stress. And that creates a situation where we are overconsuming the resources of our own bodies. It’s unsustainable, and it leads to health issues or mental health issues.

Then we are also exploiting nature, because we can’t do it any other way; even if we have morals telling us to do it differently, we are still wired by this trauma stress to not live sustainably. That’s why, in the sustainability movement, we also need to look at the inner unsustainability. The trauma stress in our societies is one element that actually needs slowing down in order to heal. But when consciousness rises, there is an effortlessness, and that’s the new level of data flow and acceleration I’m talking about; I think consciousness and technology have a meeting point there. So those are two elements we can zoom into, and I’m curious what you think about them: the trauma stress, and the acceleration of consciousness and technology. And then there is collective trauma, which creates a lot of fear. Tons of unprocessed fear in our collective, in what I call “the dark lakes of our societies.” The collective unconscious is full of fear and pain, unprocessed from the past. That will inevitably come up in individuals and societies when new things are happening, so we need to deal with that somehow.

But there’s a third component. I mean, there are many more, maybe, but let’s keep it to these for our conversation. I often say that our ethical line of development is partly frozen in the permafrost of our unprocessed past, the pain of our unprocessed past. It’s like thick ice, and frozen in that ice is the ethical evolution that we never had. For all the transgressions that led up to the trauma that is still stored in humanity, we never got the ethical restoration, never got the ethical update. Even if we think about it, we can’t live it, because it doesn’t go through embodiment. So we can think right, even write books about the Holocaust or other traumas or racism, but it’s still happening, because it doesn’t go in; it doesn’t touch us in the deeper places. That’s something we should talk about: new levels of innovation without that ethical growth. That’s a delicate point, I would say. I mean, it’s a big topic. That’s it.

Dr. Angel Acosta: Yeah, there’s so much there, and this may have to be a multipart discussion. The ethical piece that you point to is very much connected to the development and deployment of artificial intelligence, to the ways in which some of these large language models scrape data from the Internet or other sources, oftentimes data that people didn’t give consent to. Think about AI as an enormous force that can manifest in, for example, ChatGPT or any kind of AI into which you input data and which can synthesize for you, process for you, do so much for you, whether it be producing images, producing video, translating, transcribing, thinking with you in a sense, being a form of intelligence. But that process is actually connected to an assemblage and a network of exploitation and suffering.

So it’s not just the software or the process you engage with; it’s the labor of the people behind the code, the low-wage labor that tagged the fundamental data that ended up training the model. It was the water that had to cool the computer banks and the semiconductors that process the data in the first place. It was the digging of tunnels that produced the networks of fiber optic cables that connect us to the Internet.

So there’s this other superstructure, or infrastructure, that has ethical implications for our use of this technology, and I think it’s important to acknowledge that. Sometimes we talk about it as if AI were just AI, when it’s really a network of assemblages, as some philosophers might say. So that’s interesting to me. And then there’s something you said: when consciousness evolves, there’s an ease, a seamless adaptation and connection to, maybe, a technological revolution. That was important for me to pull out, because I don’t think we’re there in terms of artificial intelligence. We’re not, at least most of us, in the place where our consciousness has been elevated enough for us to really know how to use this exponential tool appropriately and ethically. And, as you mentioned before, there’s the fear around whether these kinds of technologies are going to take over or create more harm. That fear is paralyzing. And I’ll end by saying, I think there’s something in your work and the work of your communities, working with that permafrost, working with that deep-frozen trauma in the context of collective trauma, that could inform how we process the frozen fear in response to the uncertainty relating to AI. That’s my piece.

Thomas Hübl: I completely agree. That’s wonderful; you said so many beautiful things. First of all, when we look at trauma, we see how trauma replays itself through anxieties, uncertainties, fears, all kinds of stuff that comes up. That’s like a fog between us; we can’t fully see each other when it’s replaying. So there is fear that happens when a real threat comes into the room, like a tiger or a terrorist or something: you need to save your life, or you need to run away, do whatever you need to do.

But often our fears are not connected to the real experiences we are having; we are simply afraid. Some people are afraid to give talks publicly. Some people are afraid to take exams. Often when new inventions come, those fears come up too, because our prediction models don’t work in the same way. So every new stage actually takes a leap of courage. At the same time, I feel that when we integrate trauma, we have more sense, a healthy sense. There is a gut feeling, a heart feeling, our rational thinking, our spiritual connection, and our ancestral connection. All together they create an alignment that gives us a sense of what’s right and what’s not right, also ethically.

So we actually get two benefits from integrating the permafrost. One is that we are more grounded and connected to the biosphere and the ecosystems, part of something more relational, more attuned, and we have fewer fears that are actually just the past replaying itself and slowing down the process of innovation. On the other hand, we have more of a sense and feeling for creating things in the way they are needed, so that we create less harm and are not steered by economy and money and success and fame and all this, because we know that things need their time, the time they need to be developed with less and less collateral damage from the shadows, without losing the motivation we started with.

So fear is one element; also, the more we integrate trauma or collective trauma, the more we will be part of healthy innovation, and the more we will have a sense for healthy innovation and the healthy application of scientific breakthroughs, for how we take genetic engineering, nanotechnology, artificial intelligence, and whatever else a step further without creating bigger damage and new cycles of re-traumatization. At the beginning we often don’t see those; only later do we see how the shadow of the past comes back, just a bit mightier. But that doesn’t mean the invention itself is good or bad. It’s just a new level of evolution.

The other thing I wanted to speak to, and you already mentioned it: first of all, how the data is being used is an ethical question and needs to be examined really deeply. I think that’s very true. But also, AI learns from a system that is not even aware of how much it is not aware of. We are not aware of what we are not aware of. We just see symptoms: a conflict here, a war there, in the Middle East, some fires, this is happening and that. But we are not aware of the components that create all this. We sometimes connect to some superficial reason, but not to the real process that is outside of our awareness.

So when our nervous system is not aware of some of the processes collectively, because they’re stored somewhere in a collective unconscious, then AI as a learning system is actually digesting or incorporating data that comes through those filters. We see the racial biases in it, and all the consequences that it will or might have if we don’t correct it.

That’s one element that has a severe traumatic impact, and there are many others. I think that’s what we also should talk about: how machine learning is learning from a system, a data pool, that has a lot of unconscious processes built into its complexity. And sometimes people say, oh, complexity is unpredictable. Yeah, maybe that’s true, but unconsciousness in complex systems very much amplifies their unpredictability. And I think that needs to be included when we build artificial intelligence: What kind of data are we learning from? And maybe, what’s the potential of an artificial nervous system as a mirror for the parts of our nervous system we are not aware of? I’m curious what you think of it.

Dr. Angel Acosta: Yeah, there’s a lot there, a lot that I wish I could rewind, and I will after this. Just to land on the last thing you said, about an artificial nervous system as a mirror: I think that’s really powerful, because in that mirror, as you point to, you can trace the bias. You can trace what’s being left out, what’s being ignored, what’s being amplified in terms of our fears or expectations. And that’s a really big problem: the bias that some of the developers and coders of these technologies bring to bear, and how those biases shape the behavior of AI and, extending the metaphor you just mentioned, shape the collective nervous system of AI. So I think there is rich learning there for us: what is it about us that would create the conditions for something we created to behave in such limiting ways?

At the same time, I want to hold the potential that this can be an extraordinarily positive opportunity to leverage artificial intelligence to aid us, maybe in processing our collective trauma, maybe in refining our perspectives on what we are unconscious about. Like, what would that look like? What would it look like to develop artificial intelligence built to help us map our collective unconscious? That’s a question I haven’t asked before. I’m asking it because I’m talking to you and thinking freshly with you. But the question comes naturally, partly, I think, because I’ve spent a bit of time processing my fear around AI, enough that it allows me to hold the parts of it that may generate incredible levels of harm and also ask certain kinds of questions about its potential in a positive way. I think there’s something there about being grounded enough that we can ask really powerful questions about its potential to positively impact human evolution, and how it is a tool for that, while at the same time paying attention to how it can do the opposite and actually generate regression in terms of consciousness, given the level of bias it could reproduce.

Thomas Hübl: Yeah, I love that. As we are talking, I think we need a multi-episode series, because we’re just getting started and already so many windows are open. I made some notes for further conversations with you about this; I would love to have at least one or two more. But since we have a little bit of time left: first of all, I’m all for it. As you said, I think it’s important for us to think in both directions. What’s the incredible potential? Because I also see incredible potential, including the potential for technology to take over a certain level of sustaining life on the planet in a way that frees up human consciousness to go to other dimensions. There’s something there about the evolution of our consciousness that technology and the current development is here to support. So I want to put this in place too; I don’t want to just look at the shadow sides. But the learning matters, because humanity is learning through us being deeply committed to live and to walk a life full of integrity, vulnerability, and the correction of our own mistakes. The willingness to constantly learn takes the willingness to be deeply related and to be updated. Conversations like this update me. I always learn something new.

I think it’s a very important function, in every group, to always be open and bow down to what life teaches us. That seems like a human function, but it’s not only a human function; it also has a translation into the intelligence that we create. And we are already looking at AI as something separate. On the one hand, that’s even a good process, because it’s as if an alien were to land here, like an extraterrestrial coach. What would an extraterrestrial coach us in? What would he, she, or it, what would they teach us?

How would an intelligence that comes from outside teach us? (That’s a question mark, too.) I think there is a moment of awakening that comes, at the beginning, through fear. The othering of artificial intelligence is also a recognition of humanity’s “is-ness,” because suddenly there’s the beginning of an other. There has always been othering within humanity, but there was no other to humanity. Now there is an other to humanity: artificial intelligence. And I think that’s based on trauma. But on the other hand, it creates a bit of an awakening, and it grows a certain kind of perspective that is important. But it’s also important that we learn to transcend it, and that we see that all of our movements inevitably influence the future, with or without artificial intelligence.

We’ve seen it over the last thousands of years, and it’s going to continue. The impulse that we all send into the future is a design factor for the future, and that is important. So our ethical ways of moving, I think, are crucial. I don’t think we are in a place where, okay, tomorrow AI is taking over; I don’t think that’s what’s happening. How we steer through this moment is also generating the outcome. The outcome is not a set thing. That’s a discussion I would also like to deepen.

I want to open one more window, and then we can see how to tie them together and maybe come back and deepen some of them. I think the issue that secular modernity has is with a very, very important function: the influence of, if you want to say it in secular terms, the future, or, if you want to say it in more religious terms, the divine, God. It has an agency. It has an impact. There’s something downloading itself; it’s not just something you experience. In a way, higher consciousness has an impact on life all the time. I think we see it with certain people, artists, luminaries, inspirational forces in their fields, or breakthroughs, or communities that develop something. There is an influence from the future. We could say, on the one hand, that AI is also downloading itself into humanity from somewhere. But we could also say there is something about life that has not only a bottom-up but also a top-down mechanism, a vertical line of development. Maybe we can leave this as a cliffhanger for the next session, but I would just like to hear your first responses about the role of that influx of information. Information: it puts something in form. And the relationship of AI and that vertical influx, I think that’s an amazing conversation to have.

Dr. Angel Acosta: Yeah, I think so, too. There’s so much there around, as you said, the struggle and the challenge of secular modernity and its relation to the divine or the spiritual. I do think that at this moment AI will generate an existential crisis that’s going to be generative. I welcome this discussion, especially since I might be closer to secular modernity in terms of beliefs and can wrestle with the role of spirit and the divine in a really productive way. And I’d also echo what you said, that maybe artificial intelligence can play a role in the way that we situate ourselves in response to it. I was recently in a public conversation with some colleagues from an organization called The House of Beautiful Business, led by Tim Leberecht and a couple of other colleagues. There’s a really powerful synergy between you and that organization.

We had a conversation around the question: Can AI heal us? Can artificial intelligence heal us? It’s definitely worth checking that conversation out. But more importantly, the question is almost unanswerable. You can’t answer it, because it can do both: it can heal us, and it can harm us. What’s important is that attempting to answer the question, attempting to really make sense of our fears and our responses to this evolutionary moment in regard to artificial intelligence, will push us to have conversations with each other: What does intelligence mean? What does humanity mean? How does our humanity change in the face of such an evolutionary moment? What does it say about our intelligence? And there, I think, through the pressure of having to respond to those dynamic questions, maybe, just maybe, we would come closer to each other, understand each other better, and create the conditions to experience healing as an indirect consequence in the face of a force like AI.

For example, the question you just asked about the divine, the spiritual, or that which comes from beyond our senses and enters our lives, whether or not we can describe it, explain it, justify it, research it: that very question is an example of how AI is pushing us to ask different questions, especially if we have the strength and the fortitude to be in a rich dialogue like this. I’ll give it back to you and say that I would like to end this conversation by inviting you, and myself, to ask: What other questions would you like to ask? What have you been asking around this subject? And then let those questions be a bit of a buffet for us to think through in future dialogues.

Thomas Hübl: Can AI heal us? Can AI contribute to collective trauma healing? I think that’s a very important question. It’s kind of a digital nervous system. As you said, AI is the culmination of a long period of evolution and building, and of the algorithms that we are only now able to program. Everything that led up to this moment, and that will outrun this moment, is important, I think.

But is there a reflection? Can it illuminate the collective unconscious? That’s a great question. How about inspiration? What inspires AI? Is it the recombination of data? Is it that lateral recombination, which at the beginning has a lot of power but will actually burn through its fuel over time? Because if you recombine a lot of data, it looks like progress for some time, and it is, but it’s a lateral dimension of progress. Is there a vertical dimension of progress, and what fuels that vertical dimension? Are we being enrolled as humans to be that vertical dimension of progress? Or is there a vertical dimension of progress within large language models themselves, or in any evolution of them? I think that’s an interesting question to ask.

So what’s the future whispering to AI? Or does AI create its own future? And what is the future that’s calling AI, or calling us as living humans? Or are we living AI? Is AI separate from human consciousness, or is it part of it? Are we all part of one? These are all great questions, and there are many more one can ask.

Dr. Angel Acosta: Yeah, we could talk all day. The question you posed around the agency that the divine might have, or that God might have, has always been a longstanding question in theology and in spiritual communities, regardless of where you stand on God and deities, on whether God exists or not. It’s always been a really provocative question about agency, the power of spirit, of spiritual forces. And it connects to the point you named earlier around the limits of secular modernity: not having the language to talk about God’s agency, not having enough language to talk about the vertical inspiration that can come through, that might be influencing AI.

So here’s what I’ll leave you with: I think our wrestling with AI, in terms of its potential agency and its potential evolution, might give us more language with which to talk about the agency that God might have, the agency that the divine might have. There’s something there that might help us think that through. I’m already feeling really enriched by that kind of exercise and thinking, so I welcome that question; I think we should definitely ask it even more. As far as questions that I have: I’m curious to know what AI can teach us about humanity. Maybe not directly teaching us, but what does our response to AI say about who and what we are? Our nature, our limitations, our potential? I’m interested in that question. I’m also interested in what the consequences will be if this kind of technology goes unchecked. And also, what happens when this technology is made available to communities that have been historically ignored or marginalized? What does it make possible for them, materially, maybe spiritually, maybe emotionally? There’s so much, brother. I’m just really curious about this moment. And I think it’s a moment where, in light of the fact that artificial intelligence can do so much thinking for us, it would be the true testament of those of us committed to change and transformation to try to keep our thinking as fresh as possible.

Thomas Hübl: Amen!

Dr. Angel Acosta: So hopefully, one day, I don’t encounter an AI Thomas Hübl and make the mistake of thinking that it’s actually you.

Thomas Hübl: Those are powerful words to close our conversation for the day, Angel. It’s a joy. It’s fun. It’s deep, it’s inspiring. Thank you so much for this. And we will have a part two, and maybe more parts; let’s see how many we need. I’m very open to continuing this, and we have lots of questions to leave in this space. For everybody who is tuning in right now: let the questions work in all of us, and then come back. Thank you so much.