
Technology

HP rolls out AI-powered ‘virtual agent’ to solve customer queries


HP has started offering customers a “virtual agent” service on its customer support site. The agent can handle common queries without human intervention, letting customers get help after the support teams go home.

While a customer service chatbot is not the newest of ideas, the HP agent learns “independently” from each chat it completes with a user, adding what it learns to its “core knowledge” of 50,000 pages of HP product information. As more users engage with the bot, it can construct additional help and guidance to answer future queries with more precision.

Wait. What’s a chatbot? 

A chatbot is a computer program designed to simulate conversation with human users. Chatbots are often powered by machine learning or AI, and can be deployed over text message, as pop-ups on websites, or via messaging apps such as Facebook Messenger or WhatsApp.
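
As a rough illustration only (not HP's system), the simplest chatbots are little more than a keyword-to-answer lookup with a fallback; the response table and fallback message below are invented for the example:

```python
# A minimal, illustrative rule-based chatbot loop (not HP's actual system).
# Real products replace the keyword table with ML-driven intent detection.
RESPONSES = {
    "printer": "Try restarting the printer and checking the ink levels.",
    "password": "You can reset your password from the account settings page.",
}

def reply(message: str) -> str:
    """Return a canned answer if a known keyword appears, else a fallback."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I'm not sure - let me connect you with a human agent."

if __name__ == "__main__":
    while True:
        user = input("You: ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        print("Bot:", reply(user))
```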

The chatbot market is expected to grow in value from $703 million in 2016 to $3.17 billion by 2021, according to MarketsandMarkets. New research from Juniper expects chatbots to be responsible for over $8 billion a year in cost savings for organizations by 2022.

HP Bot Is Friendly

HP’s friendly and conversational bot is meant to provide customers with a faster self-service alternative to waiting for a human support employee to become available. The agent appears at the bottom-right of HP support webpages, using a similar presentation to the live chat popups on many other websites.

The digital agent is capable of automatically detecting spelling mistakes and interpreting the intended meaning. It parses the customer’s query to understand what they’re asking, before searching for the answer in its catalogue of support documents. If it’s unable to resolve the problem, it’ll automatically hand over to a human operator.
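
HP has not published how the agent is built, but the flow described above (spell correction, matching against a catalogue of support documents, and a human fallback) could be sketched roughly as follows. The correction table, document titles and keyword-overlap scoring are illustrative assumptions, not HP's implementation:

```python
from typing import Optional

# Illustrative sketch of the query-handling flow described in the article.
# The correction table, documents and scoring are assumptions, not HP's code.
SPELLING_FIXES = {"noat book": "notebook", "printr": "printer"}

SUPPORT_DOCS = {
    "notebook battery": "To recalibrate the battery, drain it fully and recharge overnight.",
    "printer offline": "Check the USB or Wi-Fi connection, then restart the print spooler.",
}

def correct_spelling(query: str) -> str:
    """Apply known corrections, e.g. 'noat book' -> 'notebook'."""
    text = query.lower()
    for wrong, right in SPELLING_FIXES.items():
        text = text.replace(wrong, right)
    return text

def search_docs(query: str) -> Optional[str]:
    """Return the best-matching support document by simple keyword overlap."""
    words = set(query.split())
    best_answer, best_score = None, 0
    for title, answer in SUPPORT_DOCS.items():
        score = len(words & set(title.split()))
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

def handle(query: str) -> str:
    """Correct spelling, search the catalogue, and fall back to a human."""
    answer = search_docs(correct_spelling(query))
    if answer is None:
        # Unresolved queries are handed over to a human operator.
        return "Transferring you to a human support agent."
    return answer

print(handle("My noat book battery drains too fast"))
```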

HP vice president and head of global support centres LaChelle Porter-Ainer explained in a statement that the translation and spellcheck capabilities are the most important in letting the bot help a broad range of customers. In one chat, the agent automatically converted the phrase “noat book” in a customer query into “notebook.”

“A customer can actually communicate in their own words and the virtual agent can translate to find the intention of that customer’s question – and get that customer a response,” Porter-Ainer said. “In other words, you just type as you would normally talk.”

Increasing efficiency

HP said the agent is already cutting customer waiting times, though it has not released details of those efficiencies. With the bot able to take over handling of basic queries, staff will be freed up to focus on more complex issues.

The virtual agent is built with Microsoft-developed AI technology that was first piloted on the company’s campus. When Porter-Ainer visited Microsoft, the bot’s engineers approached her about deploying it to HP’s support centres. Although the bot’s still technically in an investigative phase, Porter-Ainer said it’s already meeting its design requirements.


Leadership

Engineers need to use tech to make humans more powerful


James Heppelmann, CEO of PTC, gave the convocation speech at Boston University’s College of Engineering, talking about how engineers can create better machines and tools for humans, rather than just focusing on robots that put humans out of work.

In his convocation speech, Heppelmann focused on the importance of better connecting humans with digital tools, and on creating tools and machines that don’t just aim to replace humans but instead aid them in their understanding of the digital world.

One of the ways to do this, Heppelmann said, is using AR. In his view, AR will help to alleviate some of the problems caused by the great divide created by automation, which has split people into two camps: the “haves” and the “have-nots.” The “haves” are the ones benefitting from, understanding and creating automation, while the “have-nots” are those being replaced. Heppelmann said that this imbalance creates an image problem for the tech industry.

He said that there needs to be a stronger focus on connecting physical, digital and human capabilities because “humans have innovation and creativity,” and that future engineers and tech industry professionals need to create “new ways to pass digital information onto humans.”

He described AR as “augmenting god-given human capabilities with a technology overlay,” like one might see in a hearing aid or smart glasses. By giving humans this overlay of digital information, AR becomes “a great equalizer [that] allows people to become smart and connected.” An example would be giving factory employees a pair of smart glasses to help them work more productively.

Heppelmann said that engineers have a responsibility to “elevate [their] focus higher than productivity and cost savings” and spoke about the concept of “the societal engineer,” which is an engineer “who uses digital technology to make humans more powerful.”

“The societal engineer combines quantitative and creative problem solving skills with the ability to communicate effectively with systems-level thinking and global awareness with a passion for innovation and awareness of public policy and a social consciousness and an appreciation for the need to improve the quality of life while creating jobs and economic opportunities.”

Heppelmann ended his convocation speech by asking engineers to take this responsibility seriously and “help create a safer, more sustainable, healthier more productive world with enough food and water and opportunity for all.”


Talent

Samsung set to open AI research lab in Cambridge


Tech giant Samsung is opening an AI research lab in Cambridge. The move has been welcomed by British Prime Minister Theresa May, but there’s concern over a mass funneling of graduates out of academic AI research.

This centre joins Samsung’s other AI centres in Moscow and Toronto. The move to build a research lab in Britain, specifically for AI, comes as no surprise following a recent announcement by Prime Minister May’s government.

U.K. spurs AI research

The U.K. government recently announced a US$400 million investment in AI from corporations and investment firms based in and out of the U.K. In addition, a report from the House of Lords Artificial Intelligence Committee states that while the U.K. can’t outspend leaders like China, it can still become a leader in AI.

The BBC reported that the new centre will be led by Professor Andrew Blake, formerly of Microsoft’s research lab in Cambridge, and that the new Samsung AI lab “could recruit as many as 150 scientists.”

The brain drain

According to the BBC, there’s concern over a funneling of graduates in AI research out of academia and into private-sector work:

“A recent study by recruitment specialists Odgers Berndtson found just 225 students in the country were doing post-graduate technology research in specialist areas including AI and machine learning. “In the US, PhD qualified experts can command packages of $300,000 [£223,000]. And in the UK, whilst not yet at that level, salaries are spiralling,” said Mike Drew, head of technology at the headhunting company. A large part of the problem is that industry is picking university departments clean of their talent. A distinguished academic in the AI field confirmed this to me – he said anyone who had done post-graduate research in machine learning could ‘name their price.'”

This isn’t an isolated situation. The same concern was raised when Facebook decided to open new AI labs in Seattle and Pittsburgh, with professors, scholars and researchers from local universities worrying about the future of academic AI research when so many graduates leave for corporate (and greener) pastures.


Technology

The future of AI depends on who’s at the table


A new report from Canada’s Brookfield Institute for Innovation and Entrepreneurship on AI’s implications for policymakers found that, for governments to implement AI successfully, the conversation around it must become more diverse.

Conversations about AI aren’t just limited to government, or to Canada. Countries all over the world are tossing their hats in the ring to figure out how best, and most seamlessly, to integrate AI into the public realm.

The report, “The AI Shift: Implications for policymakers”, outlines that there’s a need for what Brookfield calls “deliberate conversation” and that it needs to happen “amongst policymakers, technologists, social scientists and broader communities that will be impacted by a shift toward a prediction-centred society.”

The institute also observed that a further exploration of what will happen when AI is used in government, and a closer look at the decision-making process behind such AI, is needed.

Deliberate conversation

Imogen Parker, the head of justice, rights and digital society for the Nuffield Foundation, a charitable trust that funds research and student programming in the UK, outlined in her piece for TechUK what is meant by “deliberate conversation.”

Parker writes that, as the UK has announced it wants to be a leading force in ethics for technology and data use, it needs a “diversity of voices” looking into the risks and potential outcomes of the use and employment of AI in the public sphere.

Brookfield has also released a briefer on AI and its basic terminology, which includes a helpful section explaining the ethical implications.

“Due to the increasing reliance on and trust in automated systems in contexts that may require them to make moral decisions,” the document reads, “users should consider whether the values embedded in the code reflect their own.”

Giving values

The idea of a value set in a machine not reflecting a person’s values — or reflecting, depending on who the person is — is a topic of ongoing discussion. At UC Berkeley, professor Anca Dragan is working on developing algorithms for human-robot interaction to ensure that conflicts between humans and robots are avoided by teaching robots to express their intentions and capabilities. Research like hers is crucial to the ongoing and ever-evolving field of AI because there have already been conflicts between humans and AI, like the self-driving car that killed a woman in Arizona.

The conversation around AI will determine the future we build with it. And if the report from Brookfield is correct, more deliberate discussion needs to happen — and soon.

