Building Trust in the Age of AI: How We Can Finally Use Conversations Responsibly
How vCons and SCITT solve the privacy-innovation dilemma
Photo Credit: Fabian Gieske on Unsplash
Let me share something that’s been keeping me up at night. We’re sitting on an incredible opportunity with conversational AI, but we’re also walking a tightrope. Every conversation your business has contains insights that could transform how you operate. Yet each of those conversations also carries the weight of someone’s trust. How do we balance these two realities?
Think about the last time you called customer service. You probably heard that familiar message about your call being recorded for quality and training purposes. What if I told you that the recording might later be used to train an AI system? Would you feel comfortable with that? More importantly, how would the company even keep track of what you consented to, especially if you change your mind later?
This is the puzzle I’ve been working to solve, and I want to walk you through a framework that could change everything about how businesses handle conversational data.
Understanding the Real Challenge
Before we dive into solutions, let’s make sure we’re on the same page about the problem. Imagine you’re running a business that handles thousands of customer interactions daily. These conversations happen across phone calls, video meetings, chat systems, and support tickets. Each one is a goldmine of information about what your customers need, what frustrates them, and how you could serve them better.
But here’s where it gets complex. These conversations don’t just contain business insights. They include personal stories, private information, and human moments that deserve protection. In Europe, GDPR can treat voice recordings as biometric data requiring special protection when they are processed to identify individuals. In California, CCPA gives consumers the right to know what you’re doing with their data and to request its deletion. And these regulations are just the beginning.
The traditional approach has been to err on the side of caution, which often means not using this data at all. But that’s like having a library full of books you’re not allowed to read. There has to be a better way, and that’s precisely what vCons and SCITT provide.
Introducing vCons: A Container Built for Conversations
Let me introduce you to the first piece of the puzzle. A vCon, which stands for Virtualized Conversation, is a specialized container explicitly designed for conversational data. To understand why this matters, let’s think about how conversations are typically stored today.
In most organizations, a single customer interaction might be scattered across multiple systems. The audio recording sits in one database, the transcript in another, the metadata about who participated lives in a CRM system, and any notes or analysis might be in yet another application. It’s like having the pages of a book scattered across different rooms of a house.
A vCon brings all these pieces together into a single, standardized package. But it does something even more critical. It includes a special section for consent and privacy preferences. This means that wherever the conversation goes, the permissions travel with it. Think of it like a passport that travels with a person, clearly stating where they’re allowed to go and what they’re allowed to do.
The beauty of this approach becomes clear when you consider what happens when you need to share conversational data. You may want to send a customer service call to a transcription service or share it with an AI vendor for analysis. With a vCon, you’re not just sharing raw data. You’re sharing a complete package that includes clear instructions about what can and cannot be done with that data.
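To make this concrete, here is a minimal sketch of what a vCon might look like in Python. The top-level fields (version, uuid, parties, dialog, attachments) loosely follow the IETF vCon draft, but the exact consent attachment schema shown here is an illustrative assumption, not a structure fixed by the specification.

```python
import json
import uuid
from datetime import datetime, timezone

def build_vcon(caller_name, recording_url, consent_purposes):
    """Assemble a minimal vCon-style container for one recorded call.

    The top-level layout loosely follows the IETF vCon draft; the
    'consent' attachment schema below is illustrative, not standardized.
    """
    now = datetime.now(timezone.utc).isoformat()
    return {
        "vcon": "0.0.1",                  # draft format version
        "uuid": str(uuid.uuid4()),        # unique id for this conversation
        "created_at": now,
        "parties": [
            {"name": caller_name, "role": "customer"},
            {"name": "Acme Support", "role": "agent"},
        ],
        "dialog": [
            {   # the recording itself, referenced rather than inlined
                "type": "recording",
                "start": now,
                "url": recording_url,
            }
        ],
        "attachments": [
            {   # the permissions travel with the conversation
                "type": "consent",
                "granted_at": now,
                "purposes": consent_purposes,
            }
        ],
    }

vcon = build_vcon(
    caller_name="Alice Example",
    recording_url="https://recordings.example.com/call-123.wav",
    consent_purposes=["quality_assurance", "transcription"],
)
print(json.dumps(vcon, indent=2))
```

Whoever receives this package sees the recording, the participants, and the granted purposes in one place, which is the whole point: the data and its permissions cannot drift apart.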
SCITT: Creating an Unbreakable Chain of Trust
Now, packaging conversations properly is only half the solution. The second piece is SCITT, which stands for Supply Chain Integrity, Transparency, and Trust. I know that’s a mouthful, but stay with me because this is where things get interesting.
Think about how we handle essential documents in the physical world. When you buy a house, every significant document gets notarized. That notary stamp creates a permanent record that the document existed at a particular time and was acknowledged by specific parties. SCITT does the same thing for digital conversations, but with a crucial difference: the record it creates cannot be altered or deleted.
Here’s how it works in practice. Every time something significant happens to a vCon, that event gets recorded in SCITT. When a customer gives consent, that’s recorded. When the conversation is shared with a third party, that’s recorded. If analysis is performed, that’s recorded too. And if the customer later revokes consent or requests deletion, those actions are also permanently documented.
What makes this powerful is that it creates accountability across entire ecosystems. Let’s say you share a conversation with three different service providers for transcription, sentiment analysis, and AI training. Each of these providers must record their receipt of the data in SCITT. If a customer later requests deletion, you can use these records to ensure everyone who received the data correctly deletes it.
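A real SCITT transparency service registers cryptographically signed statements and issues receipts; as a rough illustration of just the append-only property, here is a toy hash-chained log in Python. The class and method names are invented for this sketch and are not part of the SCITT specification.

```python
import hashlib
import json
from datetime import datetime, timezone

class ToyTransparencyLog:
    """A toy append-only log: each entry commits to the one before it,
    so altering a past entry breaks the chain. (A real SCITT service
    adds signed statements, receipts, and independent auditability.)"""

    def __init__(self):
        self.entries = []

    def register(self, statement: dict) -> str:
        """Append a statement and return its entry hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "seq": len(self.entries),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "statement": statement,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute every hash and check that the chain links up."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = ToyTransparencyLog()
log.register({"event": "consent_granted", "vcon": "call-123", "purpose": "ai_training"})
log.register({"event": "shared", "vcon": "call-123", "with": "transcription-vendor"})
print(log.verify())  # True: the chain is intact
```

Try changing any field in `log.entries[0]` and `verify()` returns `False`, which is exactly the tamper-evidence that makes a SCITT record trustworthy as an audit trail.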
Bringing It All Together: A New Model for Trust
Now let me show you how vCons and SCITT work together to create something transformative. Imagine a customer service call that unfolds like this:
When the call begins, the system creates a vCon that will contain the recording. The customer hears the standard message about call recording and agrees to proceed. But here’s where things diverge from the traditional approach. The system can now present specific consent options. Would the customer allow the recording to be used for quality assurance? What about AI training to improve service? Each permission is recorded in the vCon and logged in SCITT with a timestamp and cryptographic proof.
As the call progresses and eventually ends, the complete conversation is packaged in the vCon along with all the consent information. When the business wants to use this conversation, they don’t have to wonder what was allowed. The permissions are right there in the package, and the SCITT record proves when and how consent was obtained.
But here’s where it gets even more interesting. Let’s say six months later, the customer decides they’re not comfortable with their conversation being used for AI training anymore. In the traditional model, this would be a nightmare. How do you find every copy of the recording? How do you prove you’ve deleted it from all AI training datasets?
With vCons and SCITT, this becomes manageable. The revocation of consent is recorded in SCITT, creating a permanent record of the customer’s wishes. The SCITT ledger shows everyone who received the conversation, so you know exactly who needs to be notified. And when deletion is complete, that too is recorded, providing proof of compliance.
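Assuming a ledger of statements like the one a SCITT service maintains, the revocation workflow reduces to a query plus two more log entries: record the revocation, find everyone who received the conversation, and record each confirmed deletion. The function names below are invented for illustration; `append` stands in for whatever registers a statement with the transparency service.

```python
def recipients_of(entries, vcon_id):
    """Scan logged statements for every party that received a conversation."""
    return {
        e["statement"]["with"]
        for e in entries
        if e["statement"].get("event") == "shared"
        and e["statement"].get("vcon") == vcon_id
    }

def revoke_consent(entries, append, vcon_id, purpose):
    """Record the revocation, then record each downstream deletion.

    `append` is a stand-in for registering a statement with the
    transparency service; here it just needs to accept a dict.
    """
    append({"event": "consent_revoked", "vcon": vcon_id, "purpose": purpose})
    for recipient in sorted(recipients_of(entries, vcon_id)):
        # in practice: notify the recipient and wait for confirmation
        append({"event": "deletion_confirmed", "vcon": vcon_id, "by": recipient})

# Minimal stand-in for statements already on the ledger
statements = [
    {"statement": {"event": "consent_granted", "vcon": "call-123"}},
    {"statement": {"event": "shared", "vcon": "call-123", "with": "transcriber"}},
    {"statement": {"event": "shared", "vcon": "call-123", "with": "ai-vendor"}},
]
new_entries = []
revoke_consent(statements, new_entries.append, "call-123", "ai_training")
for s in new_entries:
    print(s["event"], s.get("by", ""))
```

Because every share was logged, the set of parties who must delete the data is a lookup rather than a guess, and the deletion confirmations themselves become part of the permanent record.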
Why This Matters More Than Ever
You might be wondering why we need such an elaborate system. Couldn’t we just be careful with data and trust that everyone will do the right thing? The answer lies in understanding three converging trends that make this framework essential.
First, conversational AI is becoming incredibly powerful. The insights we can extract from conversations today would have seemed like science fiction just a few years ago. AI can detect customer sentiment, identify emerging issues, predict churn, and even suggest personalized solutions. But with great power comes great responsibility, and we need frameworks that ensure this power is used ethically.
Second, privacy regulations are becoming stricter and more widespread. What started with GDPR in Europe has inspired similar laws worldwide. California’s CCPA, Brazil’s LGPD, and many others share common themes around consent, transparency, and user control. Businesses need systems that can adapt to this evolving regulatory landscape.
Third, and perhaps most importantly, consumer expectations are changing. People are becoming more aware of how their data is used and more selective about who they trust. Companies that can demonstrate responsible data handling won’t just avoid penalties; they’ll earn customer loyalty and competitive advantage.
Taking the First Steps
If you’re convinced that this approach makes sense for your business, you might be wondering how to get started. The good news is that you don’t need to transform everything overnight.
Start by understanding your current conversation landscape. What types of conversations does your business have? Which ones contain the most valuable insights? Which ones carry the highest privacy risks?
Next, examine your current consent processes. How do you obtain consent today? How is it recorded? What happens when someone wants to revoke consent? Understanding your current state will help you identify where vCons and SCITT can provide the most immediate value.
Consider starting with a pilot program focused on a specific type of conversation. Customer service calls are often a good starting point because they typically already have consent processes and clear business value. As you gain experience with the framework, you can expand to other conversation types.
Remember that this isn’t just a technology implementation. It’s a new way of thinking about conversational data. Involve your legal, privacy, and compliance teams early in the process. Help them understand how vCons and SCITT can make their jobs easier by providing clear documentation and audit trails.
A Personal Reflection on Why This Matters
I’ve spent years working at the intersection of technology and privacy, and I’ve seen how the tension between innovation and protection can paralyze organizations. We’ve created a false choice between using data to improve our businesses and respecting people’s privacy. This framework proves we can do both.
What excites me most about vCons and SCITT is that they’re not just solving today’s problems. They’re creating infrastructure for a future where conversational AI is ubiquitous but still trustworthy. They’re laying the groundwork for businesses to innovate responsibly and for consumers to share their thoughts and experiences without fear.
The conversations happening in your business right now are more than just data. They’re moments of human connection, opportunities to solve problems, and chances to build relationships. By treating them with the respect they deserve while still extracting their valuable insights, we can build businesses that are both more intelligent and more trustworthy.
The path forward isn’t always easy, but it’s clear. We need to move beyond hoping we’re handling conversational data responsibly to proving it. We need to shift from avoiding the use of conversational insights to embracing them within a framework of transparency and consent. And we need to recognize that in the age of AI, trust isn’t just nice to have. It’s a business imperative.
Are you ready to take the first step? The future of responsible conversational AI is being built right now, and your business can be part of shaping it. The only question is whether you’ll lead the change or follow it.
About the Author
Thomas McCarthy-Howe is the Chief Technology Officer at Strolid Inc., where he leads the development of next-generation automotive business development solutions. With over 30 years of experience in communications technology, Thomas is a co-author of the IETF vCon draft specification and holds 15 patents in telecommunications and data management. His work focuses on building scalable, privacy-first systems that unlock business value from conversational data.
Ed: What questions do you have about implementing responsible AI in your organization? Share your thoughts in the comments; we’d love to discuss the challenges you’re facing.