At Salesforce’s World Tour event this week at ExCeL London, the company announced a host of new tools for its Agentforce platform – arriving alongside a quietly positive Q3 2025 earnings report.
Salesforce revealed impressive revenue of $10.3 billion for the quarter – a 9% year-on-year increase – prompting the company to raise its full-year revenue guidance to $41.5 billion. Unfortunately, this hasn’t translated into investor confidence in the way Salesforce might have hoped, with its stock down 29% so far in 2025.
A core part of Salesforce’s ongoing strategy is Agentforce, the organization’s agentic AI wing, which enables customers to deploy digital assistant-like models to interact with customers at scale. The latest offering is Agentforce 360, a platform for designing and deploying AI agents for the enterprise.
“It is an all-new platform that connects your customers, your employees, your operations and your agents in a whole new way. We are transforming the enterprise just as we transformed CRM, and it’s all made possible thanks to a new, reimagined Agentforce 360,” explained Zahra Bahrololoumi, UKI CEO at Salesforce, in her keynote.
TechRadar Pro sat down with Stephen Hammond, Executive Vice President and General Manager of Agentforce Marketing, to find out more.
Agentforce Marketing
“For several decades companies have all been trying to find ways to create a personalized relationship with their customers at scale,” Hammond explains.
“But the challenge is that a lot of that work to determine what would be the right kind of offer or message to put in front of someone, was basically guessed off of propensity modeling or machine scoring. There really wasn’t a strong way for the individual to share what their interests were.”
Hammond goes on to explain that organizations often depend entirely on click and search patterns to derive a customer’s interests (archaic, I know). But a new future is possible, he explains:
“With [AI] agents, they can tell you directly what they’re looking for. Then we can have a response back that’s tied back into their profile, product offerings, marketing campaigns, and anything the company is running – even knowledge base as well – and then you can have an educated response back to, you know, meaningful customer relationships.”
This is, of course, where Agentforce 360 and Data 360 come in. Existing customers get access to this new technology, which they can introduce to their platforms – supported by Data 360 and ‘intelligent context’.
“The big message really is that with the addition of agents to marketing, brands can create a connection with their customers at a scale that wasn’t possible before,” Hammond continues.
It’s all designed to make the experience as frictionless as possible for both enterprises and consumers. Virtual AI assistants, much like the customer service ‘bots’ we all know and love (!) can be built into a site – but with more autonomy in directing the customer.
A high-stakes case study
This means we’ll probably start seeing a whole lot more customer service chatbots, even in industries you might not expect. In fact, Thames Valley Police are currently trialing an AI assistant, ‘Bobbi’, for the non-emergency questions that the force spends valuable time fielding.
It’s in the very early stages – the assistant had been live for just eight days at the time we spoke. Simon Dodds, Chief Superintendent and Head of Contact Management for Thames Valley Police, explained that they’ve seen very positive results since its rollout.
“Bobbi’s already helping out with [queries such as]: ‘I found a phone, what do I do? I want to change my pay address, how do I go about it? And a parcel’s been taken off my doorstep,’” Dodds explains.
This raises questions, for me, about the accountability structure. I asked Dodds about the eventuality that the LLM makes a mistake: fails to recognize a dangerous situation, offers incorrect advice to a user, or hallucinates in the way we’re all familiar with AI doing. After all, proximity to the emergency services means the potential for high-stakes encounters.
“It’s [about] making sure that we’ve tested it – and we’ve put it through some quite significant testing. But I do say that our human operators will make mistakes, so Bobbi might make some mistakes, and that’s why we’ve got to make sure that we stay on top of the learning.”
This doesn’t, in my opinion, offer much insight into who is accountable if the AI does get it wrong – but the use case is clearly an interesting one that will develop as the tool sees more use.