Technology

IoT to boost B2B data and insights in 2018

The Internet of Things (IoT) is fast becoming the lifeblood of business, or so say the latest predictions from market research firm Forrester Research.

The company has put together a list of IoT predictions for 2018, highlighting the increasing impact that connected technology is having upon businesses. The new findings come in a report titled “Predictions 2018: IoT Moves From Experimentation To Business Scale.”

According to a recent McKinsey report, the economic benefits of the IoT are expected to reach between $3.9 trillion and $11.1 trillion within the next decade. Those effects will be felt across all industries, and industry leaders will need to understand where to allocate their resources in order to capture the benefits of this transformative technology.

IoT platforms to get more specific

Forrester predicts that IoT platform offerings will start to specialize in “design” and “operate” scenarios.

“Design” use cases, as Smart2Zero interprets them, involve creating connected products or environments with which to engage customers. “Operate” use cases, in contrast, aim to enhance processes, creating new efficiencies or improving customer experiences.

In line with a growing number of commentators, the Forrester report indicates a shift away from general-purpose platforms, such as Microsoft’s Azure IoT Suite or General Electric’s Predix, and towards smaller companies offering specialized IoT services.

The kinds of specific offerings that enterprises could pilot and roll out include voice-based services for consumers and Infrastructure as a Service (IaaS) offerings.

B2B opportunities and cybersecurity risks

“IoT is re-shaping how businesses are organized, including the roles and responsibilities of individuals — and how they work together,” said Christopher Voce, vice president and research director at Forrester. “Capturing the promise of any of these scenarios requires organizations to collaborate in new ways.”

Forrester’s research predicts there will also be more exchanges of data and insights between firms. This will lead to more B2B opportunities for companies commercializing data analysis.

Of course, as Forrester’s report points out, this new connected technology comes with well-documented risks. Implementing IoT solutions in businesses currently carries significant security risk, which could lead to an increase in IoT-related cyberattacks next year and beyond.

According to a study by consulting firm Altman Vilandrie & Company, 48 percent of participating companies have suffered at least one IoT security incident. And almost half of the businesses featured in the study with annual revenues over $2 billion estimated the potential cost of an IoT breach at more than $20 million.

Leadership

Engineers need to use tech to make humans more powerful


James Heppelmann, CEO of PTC, gave the convocation speech at Boston University’s College of Engineering, in which he talked about how engineers can create better machines and tools for humans, rather than focusing only on robots that put humans out of work.

In the speech, Heppelmann focused on the importance of better connecting humans with digital tools, and on building machines that don’t simply aim to replace humans but instead aid them in understanding the digital world.

One of the ways to do this, Heppelmann said, is through augmented reality (AR). In his view, AR can help alleviate some of the problems caused by the great divide that automation has created, splitting people into two camps: the “haves” and the “have-nots.” The “haves” are those who benefit from, understand and create automation; the “have-nots” are those being replaced by it. Heppelmann said this imbalance creates an image problem for the tech industry.

He said that there needs to be a stronger focus on connecting physical, digital and human capabilities because “humans have innovation and creativity,” and future engineers and tech industry professionals need to create “new ways to pass digital information onto humans.”

He described AR as “augmenting god-given human capabilities with a technology overlay,” as one might see in a hearing aid or smart glasses. By giving humans this overlay of digital information, AR becomes “a great equalizer [that] allows people to become smart and connected.”

One example would be giving factory employees a pair of smart glasses to help boost their productivity.

Heppelmann said that engineers have a responsibility to “elevate [their] focus higher than productivity and cost savings” and spoke about the concept of “the societal engineer,” which is an engineer “who uses digital technology to make humans more powerful.”

“The societal engineer combines quantitative and creative problem solving skills with the ability to communicate effectively with systems-level thinking and global awareness with a passion for innovation and awareness of public policy and a social consciousness and an appreciation for the need to improve the quality of life while creating jobs and economic opportunities.”

Heppelmann ended his convocation speech by asking engineers to take this responsibility seriously and “help create a safer, more sustainable, healthier, more productive world with enough food and water and opportunity for all.”


Talent

Samsung set to open AI research lab in Cambridge


Tech giant Samsung is opening an AI research lab in Cambridge. The move has been welcomed by British Prime Minister Theresa May, but there is concern over a mass funneling of graduates out of academic AI research.

The centre joins Samsung’s other AI centres in Moscow and Toronto. The decision to build an AI research lab in Britain comes as no surprise following a recent announcement by Prime Minister May’s government.

U.K. spurs AI research

The U.K. government recently announced a US$400 million investment in AI from corporations and investment firms based both in and outside the U.K. In addition, a report from the House of Lords Artificial Intelligence Committee states that while the U.K. cannot outspend leaders like China, it can still become a leader in AI.

The BBC reported that the new centre will be led by Professor Andrew Blake, formerly of Microsoft’s research lab in Cambridge, and that the new Samsung AI lab “could recruit as many as 150 scientists.”

The brain drain

According to the BBC, there is concern about graduates in AI research being funneled out of academia and into private-sector work:

“A recent study by recruitment specialists Odgers Berndtson found just 225 students in the country were doing post-graduate technology research in specialist areas including AI and machine learning. “In the US, PhD qualified experts can command packages of $300,000 [£223,000]. And in the UK, whilst not yet at that level, salaries are spiralling,” said Mike Drew, head of technology at the headhunting company. A large part of the problem is that industry is picking university departments clean of their talent. A distinguished academic in the AI field confirmed this to me – he said anyone who had done post-graduate research in machine learning could ‘name their price.'”

This isn’t an isolated situation; the same concern was raised when Facebook decided to open new AI labs in Seattle and Pittsburgh, with professors, scholars and researchers from local universities worrying about the future of academic AI research when so many graduates leave for corporate, and greener, pastures.


Technology

The future of AI depends on who’s at the table


A new report from Canada’s Brookfield Institute for Innovation and Entrepreneurship on AI’s implications for policymakers found that successfully implementing AI in government requires a diversity of voices in the conversation.

Conversations about AI aren’t limited to government, or to Canada. Countries all over the world are tossing their hats in the ring, trying to figure out how best, and most seamlessly, to integrate AI into the public realm.

The report, “The AI Shift: Implications for policymakers”, argues that there is a need for what Brookfield calls “deliberate conversation,” and that it needs to happen “amongst policymakers, technologists, social scientists and broader communities that will be impacted by a shift toward a prediction-centred society.”

The institute also observed that a further exploration of what will happen when AI is used in government, and a closer look at the decision-making process behind such AI, is needed.

Deliberate conversation

Imogen Parker, the head of justice, rights and digital society for the Nuffield Foundation, a charitable trust that funds research and student programming in the UK, outlined in her piece for TechUK what is meant by “deliberate conversation.”

Parker writes that, as the UK has announced that it wants to be a leading force in ethics for technology and data use, it needs a “diversity of voices” looking into the risks and potential outcomes of employing AI in the public sphere.

Brookfield has also released a briefer on AI and its basic terminology, which includes a helpful section explaining the ethical implications.

“Due to the increasing reliance on and trust in automated systems in contexts that may require them to make moral decisions,” the document reads, “users should consider whether the values embedded in the code reflect their own.”

Giving machines values

The idea that the values embedded in a machine may not reflect a given person’s values (or may, depending on who that person is) is a topic of ongoing discussion. At UC Berkeley, Professor Anca Dragan is working on algorithms for human-robot interaction that aim to avoid conflicts between humans and robots by teaching robots to express their intentions and capabilities. Research like hers is crucial to the ongoing and ever-evolving field of AI because conflicts between humans and AI have already occurred, such as the self-driving car that killed a woman in Arizona.

The conversation around AI will determine the future we build with it. And if the report from Brookfield is correct, more deliberate discussion needs to happen — and soon.
