Category Archives: Antivirus Vendors

It Did What? The Dirty Secrets About Digital Assistants

Are Siri and Other Digital Assistants Actually a Security Risk?

People started fearing digital assistants before they even became a reality. Before computers were a household commodity, Stanley Kubrick was terrifying cinemagoers with HAL, the rogue AI assistant of 2001: A Space Odyssey.

Today though, our intelligent personal assistants form an important part of our lives. As AI technology advances they will become even more prevalent.

While the dangers imagined in the sci-fi movies of the '60s and '70s are thankfully far from being around the corner, it's important to look at the real security risks that digital assistants could pose.

Despite being the most popular intelligent personal assistants, Siri and Cortana are not the only iterations of this growing technology on the market. Amazon, for example, now offers its Echo device, while Facebook has recently released its own digital assistant, called M.

So what are the dangers?

Not to sound too ominous, but IBM has banned the use of Siri for its employees. The rule was set by IBM Chief Technology Officer Jeanette Horan, who cited security concerns.

You know those lengthy license agreements you have to accept when you first start using a device, the ones most people don't bother reading?

Well, Apple’s iPhone Software License Agreement explains, rather vaguely, how voice commands are used after being submitted to Siri: “When you use Siri or Dictation, the things you say will be recorded and sent to Apple in order to convert what you say into text.”

What’s more, “by using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation, and other Apple products and services.”

Sounds like jargon? The convoluted style of these agreements often helps to gloss over important information that most companies know their users will be glancing over at best.

Siri may not literally be watching you, but the fact is that everything you say to her is sent to a big data center in Maiden, North Carolina. IBM’s Horan decided to ban Siri because it could be storing sensitive information for an unspecified amount of time.

If Apple were breached, hackers could intercept that data. And perhaps just as alarmingly, a lot of the data is sent to third-party companies. Besides the fact that you’ll receive an onslaught of targeted ads, the more companies this information is sent to, the less private it becomes.

This is far from being solely an Apple issue though.

Amazon Echo, A Criminal Witness?

In a case that has seen Amazon largely mirror Apple’s resolve in refusing to hand over encrypted data to the FBI, the Amazon Echo may have been a key witness to a murder.

James Andrew Bates is suspected of having killed Victor Collins in his apartment. No one else was present at the scene of the crime; except, that is, Alexa, which was being used to stream music throughout the night.

Amazon, much like Apple, has abstained from giving police the data from Alexa, saying it would set an unwanted precedent. This shows, at the very least, that police in Bentonville, Arkansas, where the crime took place, believe Alexa may be capable of storing sensitive information. So much so that they believe it could incriminate a suspect in a murder case.

Whilst this is obviously an extreme example of a data privacy issue, what implications does it have in a regular home?

The biggest all-round concern for cybersecurity experts is that these devices are programmed to listen constantly. Amazon’s Echo device is called to action by the command “Alexa”. This seems like an obvious vulnerability that hackers could use to listen in on conversations taking place in the home.

Aside from this, the Echo cannot differentiate between different voices, so anyone who comes into your home potentially has access to every account linked to Alexa.

Other Risks

So, whilst it has yet to happen, and has not been allowed by any of the big tech companies, lawyers or the police could potentially subpoena this sensitive information, if law enforcement gets its way.

If they do, they’ll have the key to a huge amount of information, with Apple, Amazon and Google being amongst a growing list of companies that keep an archive of voice commands.

The problem, however, goes beyond the mere use of digital assistants. As the use of integrated devices and smart homes increases, more and more devices will have the ability to store potentially sensitive information. A smart TV, for example, could easily have the capability to store recorded audio. Whilst this would seemingly be aimed at targeted ads, there is again the possibility that sensitive information could be stored unbeknownst to its users.

Keep Safe

The most obvious advice is easy to follow, and most people will already be practicing it: don’t say sensitive information, like passwords or credit card details, out loud. It’s likely to become increasingly difficult to know who (or what) is listening within your own home.

Meanwhile, whilst operating systems such as iOS do let you manage data collection by changing privacy settings, the only option the Amazon Echo gives you is to unplug the device when not in use. It’s important, therefore, to look at your privacy settings, whatever the device.

So aside from telling us tomorrow’s weather, where the best restaurants are, and the occasional bad joke, digital assistants do pose some real risks to our cybersecurity.

Whilst the technology undoubtedly connects us more seamlessly to our devices, in turn making our lives easier, it’s important to always take into account the issue of privacy, an issue that technology is making increasingly tenuous within our own homes, for better or for worse.


Your Android lock pattern can be cracked in just five attempts


If you use a lock pattern to secure your Android smartphone, you probably think that’s the perfect way to avoid unwanted intrusions. However, that line you draw with your finger may be a bit too simple. After all, if even Mark Zuckerberg himself used ‘dadada’ for all of his passwords, it is not surprising that your lock pattern may be a simple letter of the alphabet.

Android lock patterns can be easily cracked using a computer vision algorithm.

Relax, you are not the only one. Around 40 percent of Android users prefer lock patterns to PIN codes or text passwords to protect their devices. And they usually go for simple patterns. Most people only use four of the nine available nodes, according to a recent study conducted by the Norwegian University of Science and Technology. Additionally, 44 percent of people start their lock screen pattern from the top left corner of the grid.

Even though creating a more complicated pattern may seem like the best way to make your password harder to guess, a team of researchers has demonstrated that, with the help of an algorithm, complex patterns are surprisingly easier to crack than simple ones.
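To put the scale of the problem in perspective, the short sketch below (our illustration, not code from the study) enumerates every pattern Android will accept on its three-by-three grid, applying the standard rule that a stroke may not jump over a node that has not yet been visited. The grid numbering and the minimum length of four nodes reflect normal Android behavior; everything else is purely illustrative.

```python
# Illustrative sketch: count all valid Android lock patterns on the 3x3 grid.
# Nodes are numbered 0-8, left to right, top to bottom.
from itertools import permutations

# Pairs of nodes whose connecting line passes over a third node; that middle
# node must already have been visited for the move to be allowed.
SKIPS = {
    (0, 2): 1, (2, 0): 1, (3, 5): 4, (5, 3): 4, (6, 8): 7, (8, 6): 7,  # rows
    (0, 6): 3, (6, 0): 3, (1, 7): 4, (7, 1): 4, (2, 8): 5, (8, 2): 5,  # columns
    (0, 8): 4, (8, 0): 4, (2, 6): 4, (6, 2): 4,                        # diagonals
}

def is_valid(pattern):
    """Return True if the node sequence obeys Android's 'no jumping' rule."""
    visited = set()
    for a, b in zip(pattern, pattern[1:]):
        visited.add(a)
        middle = SKIPS.get((a, b))
        if middle is not None and middle not in visited:
            return False
    return True

# Android accepts patterns of 4 to 9 distinct nodes.
for length in range(4, 10):
    count = sum(1 for p in permutations(range(9), length) if is_valid(p))
    print(f"{length}-node patterns: {count}")
```

The full space works out to only a few hundred thousand patterns, so an attacker who can narrow the candidates down from video footage of a fingertip has very little left to try.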

Hackers can steal your lock pattern from a distance

Picture this: You sit at a table in your favorite café, take your smartphone out of your pocket and trace your lock pattern across the phone screen. Meanwhile, an attacker at a nearby table films the movements of your fingers. Within seconds, the software installed on their device will suggest a small number of possible patterns that could be used to unlock your smartphone or tablet.

Researchers from Lancaster University and the University of Bath in the UK, along with Northwest University in China, have shown that this type of attack can be carried out successfully by using footage filmed with a video camera and a computer vision algorithm. The researchers evaluated the attack using 120 unique patterns collected from users, and were able to crack 95 percent of patterns within five attempts.

The attack works even if the video footage does not capture any of the on-screen content, and regardless of the size of the screen. The attackers would not even need to be close to the victim: the team was able to steal information from up to two and a half meters away by filming with a standard smartphone camera, and from nine meters using a more advanced digital SLR camera.

Surprising as it may seem, the team also found that longer patterns are easier to hack, as they help the algorithm to narrow down the possible options. During tests, researchers were able to crack all but one of the patterns categorized as complex, 87.5 percent of medium-complexity patterns, and 60 percent of simple patterns on the first attempt.

Now, if tracing a complex pattern is not a safe alternative, what can you do to protect yourself, especially if you store sensitive data on your smartphone? Using your hand to cover the screen when drawing your lock pattern (just as you do when using an ATM), or reducing your device’s screen color and brightness to confuse the recording camera are some of the recommendations offered by researchers.


Mac FindZip ransomware decryption tool unzips your encrypted files

In late February 2017, a new type of ransomware for Mac was discovered. This ransomware, called FindZip, infects users by posing as a cracked version of commercial applications such as Adobe Premiere Pro. Once it infects a Mac, it uses ZIP encryption to encrypt documents – the exact same scheme used by the Windows ransomware Bart, which we decrypted last summer.
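For context, the encrypted files are essentially password-protected ZIP archives, so “decryption” amounts to recovering that password and unzipping. As a minimal, hypothetical sketch of the final recovery step (the file name and password below are placeholders, and it assumes the archives use the legacy ZipCrypto scheme that Python’s standard library can read), restoring the files once the password is known could look like this:

```python
# Hypothetical recovery step: extract files from a password-protected ZIP
# archive once its password has been recovered. Assumes legacy ZipCrypto
# encryption, which Python's built-in zipfile module can decrypt.
import zipfile

def restore_files(archive_path: str, password: str, dest_dir: str) -> None:
    """Extract a ZipCrypto-protected archive using a recovered password."""
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(path=dest_dir, pwd=password.encode("utf-8"))

if __name__ == "__main__":
    # Placeholder values for illustration only.
    restore_files("encrypted_documents.zip", "recovered-password", "restored")
```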