Technology

IAmI taps IBM Cloud to defend enterprise networks against attack

IAmI Authentications, a B2B cybersecurity provider, has developed a real-time protection system to defend enterprise networks against intrusion attacks using stolen login credentials. IAmI can issue alerts when an ID is being used without authorisation.

Intelligent authentication

IAmI describes its service as a way to “crowdsource, decentralize and tokenize authentication and identity.” The company provides an “intelligent” form of two-factor authentication for the workforce of digital enterprises.

The IAmI app operates similarly to the authentication apps you may already have on your phone. It enforces two-factor authentication using push notifications. When you try to log in to a protected company resource, you have to approve the attempt in the app or press “deny” to block it. IT administrators can monitor the service to detect incoming threats in real time.
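
IAmI has not published its implementation details; purely as an illustration of the approve/deny pattern described above, a server-side sketch of push-based login approval might look like the following Python snippet. All names here, such as send_push_challenge, are hypothetical stand-ins, not IAmI’s actual API.

```python
import secrets
import time

PENDING = {}        # challenge_id -> login attempt awaiting a user decision
CHALLENGE_TTL = 60  # seconds before an unanswered challenge is treated as denied

def send_push_challenge(device_token: str, challenge_id: str) -> None:
    # Hypothetical stand-in for a push-notification gateway call
    # (in IAmI's case, delivery would run through IBM Cloud services).
    print(f"push -> {device_token}: approve or deny challenge {challenge_id}")

def start_login(username: str, device_token: str) -> str:
    """Called after the password check passes: hold the session and push a challenge."""
    challenge_id = secrets.token_urlsafe(16)
    PENDING[challenge_id] = {"user": username, "created": time.time()}
    send_push_challenge(device_token, challenge_id)
    return challenge_id

def on_user_response(challenge_id: str, approved: bool) -> str:
    """Invoked when the user taps approve or deny in the mobile app."""
    attempt = PENDING.pop(challenge_id, None)
    if attempt is None or time.time() - attempt["created"] > CHALLENGE_TTL:
        return "expired"  # unknown or stale challenge: fail closed
    if not approved:
        # A denial is itself a signal: surface it to IT staff in real time.
        print(f"ALERT: credentials for {attempt['user']} used without authorisation")
        return "blocked"
    return "approved"
```

The key property of this pattern is that the login is held server-side until the user responds, so a stolen password alone never yields a session.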

In a post published to IBM’s blog, IAmI explained that cloud-powered authentication solutions help organisations address data breaches. Many organizations remain unaware of incidents until after the intrusion has occurred. Blocking attackers at the point of an unauthorised login gives IT staff visibility into the threat. It also enables them to actively deploy additional protections.

“Many organizations don’t even know their network security has been compromised until it’s much too late. Once stolen, their private data is impossible to protect,” said IAmI. “[IAmI] empowers users to protect their own login credentials from hackers who would otherwise try to exploit it to gain unauthorized access. Organizations and users can know their login credentials and secure data can no longer be nefariously exploited or breached.”

Cloud security

IAmI’s solution uses IBM Cloud to distribute authentication alerts to enterprise devices. The system can monitor every aspect of an enterprise’s authentication, from end-user logins through to database access attempts. The company’s apps run across all major mobile platforms, including iOS, Android and even the Apple Watch. Admins can monitor intrusion attempts from wherever they are.

The service provides an example of how cloud solutions can improve the cybersecurity posture of enterprises. Although cloud software is often seen as a risk in itself, effective use of breach detection systems can enable firms to mitigate the risks. Stolen-credential attacks are common but can be blocked if the technology is already in place.

According to IAmI, major data breaches take an average of 205 days to detect. An authentication app cuts that delay to a matter of seconds when defending against attackers using basic credential theft techniques. Cloud services can also offer behavioural analysis functions to proactively identify irregularities that could signal a threat. If a user logs in from an unusual location or a new device, administrators can be alerted first so they can verify the user’s identity.
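
The article doesn’t describe the underlying models, but as a loose sketch of the simplest such behavioural check, a service might flag any login arriving from a device-and-country pair the account has never used before. All names below are illustrative:

```python
from collections import defaultdict

# Known (device_id, country) pairs per user, built up from approved logins.
seen_fingerprints = defaultdict(set)

def is_routine_login(user: str, device_id: str, country: str) -> bool:
    """Return True for a familiar device/location, False if admins should verify first."""
    return (device_id, country) in seen_fingerprints[user]

def record_approved_login(user: str, device_id: str, country: str) -> None:
    """Once an administrator (or the user) verifies the login, remember its fingerprint."""
    seen_fingerprints[user].add((device_id, country))
```

A production system would of course draw on richer signals (IP reputation, time-of-day patterns, login velocity) and likely a trained model rather than a simple set lookup.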

Leadership

Engineers need to use tech to make humans more powerful

James Heppelmann, CEO of PTC, gave the convocation speech to Boston University’s College of Engineering, talking about how engineers can create better machines and tools for humans rather than just focusing on robots that put humans out of work.

In the speech, Heppelmann focused on the importance of better connecting humans with digital tools, and on creating tools and machines that don’t just aim to replace humans but instead aid them in their understanding of the digital world.

One of the ways to do this, Heppelmann said, is using AR. In his view, AR will help to alleviate some of the problems caused by the great divide created by automation, where people have been split into two camps: the “haves” and the “have-nots.” The “haves” are the ones who benefit from, understand and create automation; the “have-nots” are those being replaced by it. Heppelmann said that this imbalance creates an image problem for the tech industry.

He said that there needs to be a stronger focus on connecting physical, digital and human capabilities because “humans have innovation and creativity” and future engineers and tech industry professionals need to create “new ways to pass digital information onto humans.”

He described AR as “augmenting god-given human capabilities with a technology overlay,” like one might see in a hearing aid or smart glasses. By giving humans this overlay of digital information, AR becomes “a great equalizer [that] allows people to become smart and connected.” One example would be giving factory employees a pair of smart glasses to boost their productivity.

Heppelmann said that engineers have a responsibility to “elevate [their] focus higher than productivity and cost savings” and spoke about the concept of “the societal engineer,” which is an engineer “who uses digital technology to make humans more powerful.”

“The societal engineer combines quantitative and creative problem solving skills with the ability to communicate effectively with systems-level thinking and global awareness with a passion for innovation and awareness of public policy and a social consciousness and an appreciation for the need to improve the quality of life while creating jobs and economic opportunities.”

Heppelmann ended his convocation speech by asking engineers to take this responsibility seriously and “help create a safer, more sustainable, healthier, more productive world with enough food and water and opportunity for all.”


Talent

Samsung set to open AI research lab in Cambridge

Tech giant Samsung is opening an AI research lab in Cambridge. The move has been welcomed by British Prime Minister Theresa May, but there is concern that it will accelerate the funneling of graduates out of academic AI research.

This centre joins Samsung’s other AI centres in Moscow and Toronto. The move to build a research lab in Britain, specifically for AI, comes as no surprise following a recent announcement by Prime Minister May’s government.

U.K. spurs AI research

The U.K. government recently announced a US$400 million investment in AI from corporations and investment firms based both in and outside the U.K. In addition, a report from the House of Lords Artificial Intelligence Committee states that while the U.K. can’t outspend leaders like China, it can still become a leader in AI.

The BBC reported that the new centre will be led by Professor Andrew Blake, formerly of Microsoft’s research lab in Cambridge, and that the new Samsung AI lab “could recruit as many as 150 scientists.”

The brain drain

According to the BBC, there’s concern over a funneling of graduates in AI research out of academia and into private-sector work:

“A recent study by recruitment specialists Odgers Berndtson found just 225 students in the country were doing post-graduate technology research in specialist areas including AI and machine learning. “In the US, PhD qualified experts can command packages of $300,000 [£223,000]. And in the UK, whilst not yet at that level, salaries are spiralling,” said Mike Drew, head of technology at the headhunting company. A large part of the problem is that industry is picking university departments clean of their talent. A distinguished academic in the AI field confirmed this to me – he said anyone who had done post-graduate research in machine learning could ‘name their price.'”

This isn’t an isolated situation; the same concern was raised when Facebook decided to open new AI labs in Seattle and Pittsburgh, with professors, scholars and researchers from local universities worrying about the future of academic AI research when so many graduates leave for greener corporate pastures.


Technology

The future of AI depends on who’s at the table

A new report from Canada’s Brookfield Institute for Innovation and Entrepreneurship on AI’s implications for policymakers found that successfully implementing AI in government requires a diverse conversation.

Conversations about AI aren’t limited to government, or to Canada. Countries all over the world are tossing their hats into the ring, trying to figure out how best, and most seamlessly, to integrate AI into the public realm.

The report, “The AI Shift: Implications for policymakers”, outlines that there’s a need for what Brookfield calls “deliberate conversation” and that it needs to happen “amongst policymakers, technologists, social scientists and broader communities that will be impacted by a shift toward a prediction-centred society.”

The institute also observed that further exploration is needed of what will happen when AI is used in government, along with a closer look at the decision-making process behind such AI.

Deliberate conversation

Imogen Parker, the head of justice, rights and digital society for the Nuffield Foundation, a charitable trust that funds research and student programming in the UK, outlined in her piece for TechUK what is meant by “deliberate conversation.”

Parker writes that, as the UK has announced it wants to be a leading force in ethics for technology and data use, it needs a “diversity of voices” looking into the risks and potential outcomes of employing AI in the public sphere.

Brookfield has released a briefer on AI and its basic terminology; it includes a helpful section explaining the ethical implications.

“Due to the increasing reliance on and trust in automated systems in contexts that may require them to make moral decisions,” reads the document, “users should consider whether the values embedded in the code reflect their own.”

Giving values

The idea of a machine’s value set not reflecting a person’s values (or reflecting them, depending on who that person is) is a topic of ongoing discussion. At UC Berkeley, professor Anca Dragan is developing algorithms for human-robot interaction that aim to avoid conflicts between humans and robots by teaching robots to express their intentions and capabilities. Research like hers is crucial to the ongoing, ever-evolving field of AI because conflicts between humans and AI have already occurred, such as the self-driving car that killed a woman in Arizona.

The conversation around AI will determine the future we build with it. And if the report from Brookfield is correct, more deliberate discussion needs to happen — and soon.
