Design Mind frogcast Ep.57 – Writing the Future of AI

Our Guest: Sheldon Pacotti, Technology Director, frog, part of Capgemini Invent
Podcast

On this episode of the Design Mind frogcast we welcome Sheldon Pacotti, Technology Director at frog North America, to talk about using AI to create meaningful connections and the power of the parasocial.

Sheldon’s passions for technology, fiction writing and game design have influenced his work and his life, and may inspire you to write your own chapter in the world of design.

Listen to the podcast episode and watch the full video below. You can also find the Design Mind frogcast on Apple Podcasts, Spotify and anywhere you listen to podcasts.

For more on the co-evolution between humans and AI download our latest frog Futurescape report ‘Artificial Realities.’

Episode Transcript:

Design Mind frogcast

Episode 57: Writing the Future of AI

Guest: Sheldon Pacotti, Technology Director, frog, part of Capgemini Invent

[00:00:03] Elizabeth Wood: Welcome to the Design Mind frogcast. Each episode, we go behind the scenes to meet the people designing what’s next in the world of products, services and experiences, both here at frog and far, far outside the pond. I’m Elizabeth Wood.

[00:00:19] Elizabeth Wood: Today on our show, we’re talking about the interplay between worldbuilding in a creative sense, such as within the context of gaming and literature, and in the context of using technology to design the world around us. To do this, we’re joined by Sheldon Pacotti, Technology Director for frog North America. Sheldon talks to us about the evolving role of AI agents, the concept of hyper-personalization and the potential for AI to create more meaningful interactions.

[00:00:45] Elizabeth Wood: Alongside his work as a technologist at frog, Sheldon is also a game designer and an author of fiction, so he has a lot to share about the ways in which his passion for technology and writing influence one another. Here’s Sheldon now.

[00:00:58] Sheldon Pacotti: It is a two-way street. My technology work does spark ideas for my fiction, and vice versa. I’m actually currently writing a novella where the enterprise is becoming fully autonomous, and so that’s the question I’m asking: how does this feel to the employees, and what is this enterprise accomplishing when it’s moving faster than humans can even quite understand? So there’s definitely a two-way cross-pollination going on.

[00:01:21] Sheldon Pacotti: I’m Sheldon Pacotti. I’m a Technology Director. I lead the solutions architecture group at frog North America.

[00:01:27] Sheldon Pacotti: Fiction writing actually is very much design, especially since I write a fair amount of science fiction. It’s not the only thing I write, but I write near-future stuff where I have to think about augmented reality and virtual reality and AI and different things like that. And so I find myself often asking, really, how is this going to work? If you have a contact lens in your eye that’s beaming stuff onto your retina, and then you’re in communication virtually with all these other people, what are you really going to want to do? Or how are you going to signal to the system that you want to bring up a data visualization or something? There end up being a lot of mundane questions that I ask just writing a story. And so it kind of fills my head with ideas before we ever get to a project where we’re trying to design something that’s somewhat futuristic.

[00:02:08] Sheldon Pacotti: I have the same origin story as most technologists my age. I’m in my mid-50s, and so I bought one of those department store computers from the ’80s. These were like $100–200 little toys you could pick up that were barely computers by today’s standards. And for me, the reason I wanted it so badly is I was seeing video games showing up in every place you could imagine, like laundromats and restaurants and stuff. I really wanted to make video games, and it was all about just trying to make these experiences that I thought could be possible. They weren’t really out there in the world yet, but I imagined these scrolling video games with, like, forests to explore and things like that. And so that was pretty formative, I think, because it turned me into a technologist, and I had tons of fun with it once I learned to code and everything.

[00:02:49] Elizabeth Wood: Sheldon’s passion for coding and gaming led him to study at MIT and Harvard, but it was during his education that he began exploring other creative paths beyond conventional engineering.

[00:02:59] Sheldon Pacotti: I think throughout my life, at each of the key moments, I was striving for some kind of opportunity to create an experience that wowed people. And I think that’s part of what took me out of MIT. I think the education there was fairly mundane in some ways. I was learning engineering and whatnot, and wanted to write fiction as well. So I ended up taking a course at Harvard; I cross-registered there. I tried really hard, and the professor there wrote a recommendation for me that eventually got me in as an English major. So I ended up kind of doing both degrees. But again, it was that striving for entertaining people, or just kind of making them experience something. For me, what made gaming so engrossing is that it’s not just writing a linear story or, you know, imagining a certain fixed experience. It’s also all the logic and possibility and exploration that a video game brings.

[00:03:47] Sheldon Pacotti: So that’s where I found my sweet spot in the business, as a professional but also just creatively: where you’re inventing a fictional universe, but you’re also thinking about all the things that the player/user is going to want to do, and accommodating that, you know, with code and logic and all the technical pieces that go into that. And I think in terms of my professional career as a game developer, the area I focused on was narrative design and writing. So I tend to approach interaction from the notion of, you know, what is this experience that the end-user is going to inhabit? And so I think that perspective is always there when we start a new project.

[00:04:27] Sheldon Pacotti: You know, we have many of these corporate projects where we will be doing gamification or something; that’s the corporate word for bringing in some of these interactions. And I think what I always have from the game business is this really strong effort to create meaning. You know, when you’re creating a fictional game world, you’re creating interactions where you care about the gold you’re collecting, or whatever; you care about the thing you’re going to be able to do with the item you found. And so in the corporate world, when we look at gamification, you’re scoring points and you’re able to buy things and whatnot. So how do you make that actually meaningful? Not just make our website have this thing in it that everybody else has, but how is it actually a meaningful experience for the user?

[00:04:55] Sheldon Pacotti: I think the key to achieving any kind of truly meaningful experience is that model of the user. In the game business that’s the player, and you care about how powerful they are, what class they are and things like that. In the corporate world, you care about who they are, and so that’s a point of view that we bring a lot of times to projects where we look at the data across an organization. What do you know about this user? Have they talked to customer service? Are they having a current problem? Are they a pro user or a high-net-worth individual, if it’s a bank, right? Once you know all the things you know about the person, how do you tailor an experience to that and respond to them so that they feel like they’re being seen and understood?

[00:05:32] Elizabeth Wood: At frog, designing and building products and services goes beyond designing interactions to serve user needs. It’s about understanding the influence these novel concepts might have, imagining scenarios where these concepts play out, and being intentional about their impact. Here, Sheldon shares more about the technologist’s role in this process.

[00:05:52] Sheldon Pacotti: Technology at frog is a little different than at most companies. We tend to be on very small, cross-disciplinary teams, where we might have one or two technologists with one or two designers, strategists and so on. So we’re very tightly integrated. We do everything technologists do. We’ll code proofs of concept. We might do alpha builds of things, hands-on coding and architecture and whatnot. But also a lot of strategy, a lot of co-creation. And so what we do as a solutions architecture group is bring pretty much that full kitchen sink of capabilities to a project and work with a team to get things built and done, but also to hopefully help them be thoughtful about how they use technology.

[00:06:30] Sheldon Pacotti: One thing we try to accomplish as technologists is to bring an understanding of the underlying technology to a team, and I think there are two parts to it. One is that there needs to be a realistic understanding of what the technology is. Then also, you know, what it means to the end-user. What’s the impact on the user? A sci-fi example I bring up a lot is the way that science fiction writers approached escalators back in the ’40s and ’50s. They imagined them being built out throughout all of society. So you’d have escalator, you know, moving-sidewalk highways, where you could get into one lane going 10 miles an hour, the next lane 15, until the fast lane is going 70 miles an hour to take you to the city. I think we have a tendency to take a technology and just extrapolate into the future to the extreme. We’re doing it with AI right now, with these conversational interfaces. We’re imagining that there’ll be no UI, there’ll be nothing else. We’ll just be talking to our bots.

[00:07:18] Sheldon Pacotti: I hope that as technologists, we help teams understand what technology really is likely to be used for, what it can become, but then also what that impact is.

[00:07:26] Sheldon Pacotti: Social media is an example where technologists could have done a lot, I think, to educate companies when they were building algorithms around social media about what impact it was likely to have. Maybe we couldn’t have known all the ramifications. But I think that’s an example where if your KPI at your company is to maximize clicks or whatever, there’s another side of that, you know: not just the advertising dollars, but what’s happening to the person on the other end.

[00:07:48] Elizabeth Wood: The continued use of social media and our always-online lifestyle has given rise to parasocial relationships. These are one-sided relationships where someone feels a sense of closeness or familiarity with a person who doesn’t know them. This often happens with influencers, streamers, celebrities and now even AI, where constant digital presence can create the feeling of a real connection without any mutual relationship. During our conversation, Sheldon shared his view on where these parasocial connections might lead us.

[00:08:19] Sheldon Pacotti: I think, to really understand parasocial in today’s world, we should go back to where that term was first being used the way we’re using it today, which was back in the 1950s, when psychologists thought that there were essentially pathological social and/or emotional behaviors emerging because of mass media, where they saw even just sitting in front of a movie as a parasocial relationship. There’s a great phrase they used, which was “simulacrum of conversational give-and-take” or something like that.

[00:08:45] Sheldon Pacotti: So they were looking at just a movie back then as a simulation of a relationship. And the reason it came to mind as a trend that’s happening today is that I had a specific client in mind. It was a regional bank where part of their identity was that they were very friendly and cared about each individual customer. They had a KPI where they would grade each branch on the amount of time it took for somebody to be greeted when they entered the branch. As researchers in the field, we of course tried to test this: we tried to sneak into the bank, or quietly collect somewhere, and we couldn’t do it without being greeted very quickly. People would come out of side offices to greet us. So it was a very core part of the bank, and our project was to help the bank transition into a digital world. This was web and mobile at the time we were doing it, and so we were asking all those questions: what is the essence of your brand? What do you know about the people you’re talking to, so that it is a personal relationship? How do we embody this in the platform and the experience? And so today, with conversational interfaces and AI, I see that becoming possible in a way it wasn’t possible then, where we really are talking to users. That’s a question every organization is now facing: if we’re going to have a conversation automatically created for each customer, how do we make that actually express our brand, and how do we actually make it a friendly, personal, meaningful conversation?

[00:09:59] Elizabeth Wood: Thinking further out about the power of the parasocial, Sheldon believes things are evolving in unexpected ways, especially with the rise of AI and hyper-personalization.

[00:10:10] Sheldon Pacotti: Yeah, I think the parasocial phenomenon in modern times is on track, maybe, to an unexpected destination. The midpoint was social media, where you have likes and comments and things like that. They’re sort of, you know, virtual, but yet somewhat real. There’s still a real person at the other end. That’s all termed parasocial because, you know, especially if it’s a celebrity, it’s really a one-sided relationship, and that’s kind of the core of the parasocial. But the question in my mind is, what happens with AI when it’s not entirely one-sided, where the AI really does understand you and does respond? And there’s this phrase I’m hearing tossed around: hyper-personalization.

[00:10:42] Sheldon Pacotti: Hyper-personalization starts to become possible in this relationship. So is that actually still one-sided, or is it actually even more social than the case at the bank, where someone you’ve never seen before in your life rushes out and greets you? Now you go to the bank and talk to an AI, but it actually knows everything about you and is friendly, and gets the thing done you want to do. Is that actually more social than the previous state of affairs?

[00:11:04] Sheldon Pacotti: Yeah, I think with an advanced AI system, the core question is: is it creepy or is it cool? We actually had a demonstration at South by Southwest years ago, where it was a robot that was sort of your companion, watching you, and had a camera and an eye and everything. And we took a poll: is it creepy or is it cool, right? And I think that’s the case with AI, where it’s probably a little bit of both. It’s somewhat dystopian to think that these systems will know everything about us instantly, but then it’s also probably very helpful and comforting that you actually are understood. You’re not just a number. You know when you call into a phone system today, and some of these are semi-AI-driven, and it just doesn’t understand what you want, and you can’t get to a real person? That’s very frustrating. But if it actually worked, then maybe there would be some comfort there.

[00:11:48] Elizabeth Wood: We’re going to take a short break. When we return, Sheldon will share more about agentic AI and the principles of building agentic AI systems.

Featured in this episode is the recent frog report ‘Futurescape: Artificial Realities.’ 
Download the Report

[00:11:57] Elizabeth Wood: Now back to Sheldon Pacotti, Technology Director for frog North America.

[00:12:32] Sheldon Pacotti: The definition of agent is probably a little fuzzy in terms of how it actually shows up in technology, but essentially, it’s a very sophisticated model that is able to use tools. And tools are essentially any action that the AI can take. And I kind of think the fact that the model is sophisticated and we don’t quite understand what it does is part of the definition, because that’s what makes it feel like an entity, AKA an agent.

[00:12:57] Sheldon Pacotti: Today, typical actions are pretty basic. In my coding environment, the agent is just now able to do shell commands for me, so it runs commands on my computer to install software. I think it’s wonderful. I ask it to write some code, and if it needs a dependency, it prompts me to see if it’s okay, and then goes ahead and installs it. So agents can do basic things like buy things for us and whatnot today.

[00:13:17] Sheldon Pacotti: There is the release of Operator from OpenAI that basically will read your web browser and go click and try to do stuff for you. An application that is pretty near-term, perhaps, is that an agent could be helping to buy and sell things for you or negotiate a price, or something like that. We may start ceding a little bit of decision-making power to them, and then if, in the mix of that, it can actually take an action in the world, then we consider that an agent.
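The "sophisticated model plus tools" pattern Sheldon describes, including the confirmation step before a risky action like running a shell command, can be sketched in a few lines. This is a minimal illustration, not a real framework; all names here (`Tool`, `Agent`, the stubbed `run` callables) are hypothetical, and the model's decision-making is omitted entirely.

```python
# Minimal sketch of the "model + tools" agent pattern.
# In a real system an LLM would choose the tool and argument;
# here we only model the tool registry and the human confirmation gate.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]
    needs_confirmation: bool = False  # gate risky actions behind a human check

class Agent:
    def __init__(self, tools: list[Tool], confirm: Callable[[str], bool]):
        self.tools = {t.name: t for t in tools}
        self.confirm = confirm  # callback that asks the human "is this okay?"

    def act(self, tool_name: str, arg: str) -> str:
        tool = self.tools[tool_name]
        if tool.needs_confirmation and not self.confirm(f"{tool.name}({arg})"):
            return "action declined by user"
        return tool.run(arg)

# Usage: a pretend shell tool the agent must ask permission to use.
shell = Tool("shell", "run a command", run=lambda cmd: f"ran: {cmd}",
             needs_confirmation=True)
agent = Agent([shell], confirm=lambda prompt: True)  # auto-approve for demo
print(agent.act("shell", "pip install requests"))  # -> ran: pip install requests
```

The design point is the `needs_confirmation` flag: the agent can propose any action, but side-effecting tools only execute after an explicit human yes, which is exactly the prompt-then-install behavior described above.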

[00:13:39] Sheldon Pacotti: Yeah, I’m not really sure how most people are taking the news that agents are coming into their lives. I do think there are two sides to it, though; there are really two problems to be solved. One is the enterprise representing itself with agents and/or an agent. There’s the craft that has to go around making sure that that entity is transparent and trustworthy, does the job right and actually represents the company well. And then there’s the human side, the individual side, where we’ll have our own agents. On that side of the equation is being able to understand what the agent is doing, what boundaries are set around it, making sure it has your interests at heart, that it understands you. So I think there are those two areas, and I don’t think either of them is solved yet from a design or technology standpoint. I expect there is a lot of anxiety right now around this notion of agents and what they will actually bring into our world. As a technologist, I have some anxiety: I don’t really want to turn over control of any process to an entity that can kind of just decide what pieces of code to run. So that thought still makes me a little nervous, but it is coming, and we’ll be needing to build systems that can do it gracefully.

[00:14:42] Sheldon Pacotti: Trust is actually the design principle that’s most important for AI agents and systems of agents. I actually wrote a general futurist essay a while back for the Copenhagen Institute for Futures Studies, where I predicted the future of computation out for many years, and I actually put agents as coming in 2026. So ’26 was going to be the year of the agent. And for that progression of computer design and principles, the key was going to be a very crisp and honest representation across the internet of everything that’s out there: products you could buy, things you could do, what a company is, who you are. I imagined it being done with technologies that are not going to be relevant today, I don’t think, like microformats and ontologies, these things on the web that would tell you what things were. But that’s still the enabling factor now. And so I think that as companies start to build agents, they’re actually defining tools and agent capabilities in natural language. That’s actually the medium being used to create the capabilities that agents are using, and that’s probably how agents will communicate with each other. So one company using another company’s services or products or capabilities will probably be communicating in natural language to some extent. And so then it comes down to just telling the truth and not lying. I kind of wonder whether we’ll have court cases where AI agents are accused of lying to each other. I’m not sure, but that really is the starting point.

[00:15:59] Elizabeth Wood: For Sheldon, the challenge and the opportunity is about understanding how a company can differentiate itself in a competitive landscape where many businesses are using similar AI technologies. For consumers, it’s about a shift in mindset to engage in a more interactive and collaborative conversation with AI to provide specific and valuable input, rather than just passively receiving responses.

[00:16:22] Sheldon Pacotti: As a business, to differentiate yourself with AI technology, you can’t necessarily rebuild the entire engine of the AI yourself, how it converses and what it does, but what you do have control over is the data inside of it. And in a more personal way, that’s what your product is and who your customers are, and so that’s where you can differentiate and provide not just the technology, but the entity, the agent, the interlocutor, that sets you apart from other companies.

[00:16:49] Sheldon Pacotti: One of the pitfalls right now in AI is that it is perceived as a black box, and I think that’s where companies get in trouble: when they think they’ll just grab this thing off the shelf and it’s going to do everything they want. It’s still, like every other technology, something that’s engineered. It is part of a more sophisticated workflow or set of systems. And there are actually multiple ways you can use one model and prompt it in different ways in a sequence of steps to accomplish something. Ways you want to mix models together, ways you want to use more traditional AI technologies like embeddings to just kind of find a semantic neighborhood, or something like that. So whenever you get into wanting to accomplish something specific for a business, you want to be thinking first about what it is you want to do, and then translating that into an actually designed and architected system, which is going to be a lot more complicated than what you initially imagine as a super-brilliant conversational black box.
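The "semantic neighborhood" idea Sheldon mentions can be sketched concretely: represent texts as vectors and rank them by cosine similarity, so nearby vectors are semantically related. In a real system an embedding model would produce the vectors; the tiny hand-made vectors below are illustrative stand-ins, and the document names are invented.

```python
# Sketch of finding a "semantic neighborhood" with embeddings:
# rank documents by cosine similarity to a query vector.
import math

def cosine(a, b):
    # cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings: each document is a vector in some semantic space.
# (A real embedding model would output hundreds of dimensions, not three.)
docs = {
    "reset my password": [0.9, 0.1, 0.0],
    "open a savings account": [0.1, 0.9, 0.1],
    "forgot login credentials": [0.8, 0.2, 0.1],
}

def nearest(query_vec, docs, k=2):
    # sort document names by similarity to the query, highest first
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector that lands in the "password" region of the space:
# the two credential-related documents come back first.
print(nearest([0.85, 0.15, 0.05], docs))
```

This is the retrieval step that often sits in front of a generative model: the neighborhood search narrows the data down to what is relevant before any prompting happens, which is one of the "multiple ways" of composing a system described above.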

[00:17:39] Sheldon Pacotti: My initial reaction is that the coming of AI agents is the next level of dissociation of humans from each other, right? And on the surface, it seems like a dystopian situation, and there probably is a dystopian element to it, but I do think that they probably open up many opportunities for people to come closer together, in that they will create more ways for us to connect.

[00:17:57] Sheldon Pacotti: I’m reminded of a friend of mine who had a business idea where she wanted to create a system that would help you plan a trip with a bunch of friends together, which is a really heinous process of picking hotels and events and timings and all this kind of stuff. So that could be a piece of software. But if you imagine a world where everybody has an agent and you can kind of agree on a few parameters of what you want to do, then agents can actually make it easier to come together and do something together. I do think AI and automation in general are going to free up a lot of time. They’ll be doing a lot of labor for us. I’m using AI for my coding, and noticing about a 25x improvement in efficiency, counting all the stuff that I do, like writing a prompt and debugging and whatnot. So I do think that also may be a positive thing, where we are not necessarily needing to work 12 hours a day on a deadline to get something done, and we may actually start moving toward a leisure society in some form.

[00:18:45] Elizabeth Wood: For Sheldon, if there’s a lesson that we can take from the past to stay ahead of change and even thrive in the face of some of this uncertainty, it’s about not surrendering full control to technology’s advances—or letting potential negative influences feel inevitable.

[00:19:00] Sheldon Pacotti: In terms of learning from innovations that have gone well in the past, we could look at the early internet days of the 1990s and the rise of Google. Not all of your viewers will have been alive to experience this, but the internet in the ’90s was essentially: you went to a web page full of all kinds of stuff you didn’t want to see, and then you typed in a query like, you know, presidential election, and then a whole page of pornography links came up, and somewhere in there was something about the president of Venezuela or something, right?

[00:19:24] Sheldon Pacotti: And then Google came along, where they had an algorithm that was able to find what was valuable out in the internet, and suddenly the internet worked, and everybody went to this one destination, and that’s why Google is so powerful today. So I do think we can always look at the past where we think that a technology is actually disruptive or pernicious in some way, and there’s probably a way to make it valuable.

[00:19:44] Sheldon Pacotti: As a technologist, moving into the world of AI, the one piece of advice I would have is not to take the AI behavior or whatever it is or model that you’re working with as a given. There’s always some way to change the model or use it in a different way than might be expected. And you really want to keep in mind what you want to accomplish and what you want the technology or product you’re building to bring to the world.

[00:20:07] Sheldon Pacotti: I think the key thing is, as a developer of the technology or a user of AI, to not feel powerless, to not feel like this is the way it is, this is the way the world is, this is the way the technology is. There are always ways for you to shape it. And even as a user, if technology is built the way it should be, you should be able to build the brain of your companion AI that’s helping you pick concert tickets or connect with your friends or whatever it is, find products. You should have control over that algorithm. It’s basically bringing the algorithm into your own personal control, versus the control of some entity. So there’s always some level of control I think we have, and we have to keep that in mind.

[00:20:42] Elizabeth Wood: Of course, Sheldon also shared some specific ways business leaders and creative teams can approach these changes to drive positive outcomes.

[00:20:50] Sheldon Pacotti: From a design and strategy standpoint, the key is always: what is the user value? And this is a pretty trivial thing to say, in a way. User-centered design has been the mantra for a long time, but that’s really the essence of anything we’re building. We’re building it for society. We’re building it for people, and those people are called customers, oftentimes. But from a design and strategy standpoint, that is the perspective that is easy to lose once you’re in the trenches building something and/or selling something and making money. I think the next quarter century is going to be very dramatic. My side project now in technology is an application that uses AI to predict the future. You go and ask it questions, and then it generates a prediction, and you can save it to a database to kind of build up a view of the future. So I’ve been testing it. It’s still very young, and I’ve been struck by how many pretty dramatic milestones are popping up in the 2030s. We don’t necessarily trust that the AI is accurate on all this stuff, but things like the first automated mind in 2030, or the first human brain-computer interface in 2035, medical tech like cancer being cured around that time, longevity tech. I think a lot of things are going to come together all at once that are going to change the way we have to view society and the social contract, just with all the automation that’s going to be happening with AI. And I do think it’s going to be just like the Industrial Revolution, which we shouldn’t think of in terms of a single moment where suddenly people were turned into factory workers. It’s really a moment where a lot of people have to leave what they’re doing, a big wave of replacement of workers, and I’m thinking of coders in particular, since I’m able to code like 25 developers right now, myself, and then a gradual reassessment of society.

[00:22:24] Sheldon Pacotti: There will be other types of jobs to do, all the things that the naysayers of the doom and gloom say; all of that will be true, that’ll come. It’ll take about 20 years, and so I think there will be huge social shifts, and we’ll have to be very open-minded in terms of what we think about the role of government, the role of people in society and how we get along with each other.

[00:22:43] Elizabeth Wood: That’s our show. The Design Mind frogcast was brought to you by frog, a leading global creative consultancy that is part of Capgemini Invent.

[00:22:51] Elizabeth Wood: We really want to thank Sheldon Pacotti, Technology Director for frog in North America for his insights on AI, meaningful connections and the power of the parasocial. For more on the co-evolution between humans and AI, be sure to check our show notes for a link to download frog’s new Futurescape report ‘Artificial Realities,’ complete with 12 predictions and provocations about where our relationship with technology may be heading.

[00:23:15] Elizabeth Wood: We also want to thank you, dear listener. If you like what you heard, tell your friends. Rate and review to help others find us on Apple Podcasts and Spotify. And be sure to follow us wherever you listen to podcasts. Find lots more to think about from our global frog team at frog.co/designmind. That’s frog.co. Follow frog on X at @frogdesign and @frog_design on Instagram. And if you have any thoughts about the show, we’d love to hear from you. Reach out at frog.co/contact. Thanks for listening. Now go make your mark.

Authors
Sheldon Pacotti
Technology Director

Sheldon is a Technology Director at frog in Austin. Having studied math and English at MIT and Harvard, Sheldon enjoys cross-disciplinary creative projects. He builds award-winning software, writes futurist fiction, creates software architectures for businesses and writes about technology.

Elizabeth Wood
Host, Design Mind frogcast & Editorial Director, frog Global Marketing

Elizabeth tells design stories for frog. She first joined the New York studio in 2011, working on multidisciplinary teams to design award-winning products and services. Today, Elizabeth works out of the London studio on the global frog marketing team, leading editorial content.

She has written and edited hundreds of articles about design and technology, and has given talks on the role of content in a weird, digital world. Her work has been published in The Content Strategist, UNDO-Ordinary magazine and the book Alone Together: Tales of Sisterhood and Solitude in Latin America (Bogotá International Press).

Previously, Elizabeth was Communications Manager for UN OCHA’s Centre for Humanitarian Data in The Hague. She is a graduate of the Master’s Programme for Creative Writing at Birkbeck College, University of London.

Audio Production by Steven Strange