Multiple persistent Cross-Site Scripting vulnerabilities in osTicket

Posted by Securify B.V. on Feb 28

------------------------------------------------------------------------
Multiple persistent Cross-Site Scripting vulnerabilities in osTicket
------------------------------------------------------------------------
Han Sahin, July 2016

------------------------------------------------------------------------
Abstract
------------------------------------------------------------------------
Two persistent Cross-Site Scripting vulnerabilities have…

CVE-2016-8715

An exploitable heap corruption vulnerability exists in the loadTrailer functionality of Iceni Argus version 6.6.05. A specially crafted PDF file can cause a heap corruption resulting in arbitrary code execution. An attacker can send/provide a malicious PDF file to trigger this vulnerability.

CVE-2016-8389

An exploitable integer-overflow vulnerability exists within Iceni Argus. When it attempts to convert a malformed PDF to XML, it converts each character from a font into a polygon and then rasterizes these shapes. As the application iterates through the rows and initializes the polygon shape in the buffer, it writes outside the bounds of that buffer. This can lead to code execution in the context of the account running the application.
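
The advisory describes the bug class precisely enough to illustrate. The following minimal C sketch is hypothetical (it is not Iceni Argus source, and the names are invented): an attacker-controlled row count and row size overflow the allocation size calculation, so the subsequent per-row initialization writes past the end of the undersized buffer.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical illustration of the bug class described above. */
    void rasterize_glyph(unsigned int rows, unsigned int row_bytes)
    {
        /* If rows * row_bytes exceeds UINT_MAX, the multiplication wraps
         * around and malloc() receives a much smaller size than intended. */
        unsigned char *buf = malloc(rows * row_bytes);
        if (buf == NULL)
            return;

        /* The loop still iterates over the attacker-controlled row count,
         * writing far beyond the undersized allocation: a heap overflow. */
        for (unsigned int r = 0; r < rows; r++)
            memset(buf + (size_t)r * row_bytes, 0, row_bytes);

        free(buf);
    }

The usual fix for this class of bug is to reject the input (or use an overflow-checked allocator) when rows exceeds SIZE_MAX / row_bytes, before the allocation is made.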

CVE-2016-8388

An exploitable arbitrary heap-overwrite vulnerability exists within Iceni Argus. When it attempts to convert a malformed PDF to XML, it will explicitly trust an index within the specific font object and use it to write the font’s name to a single object within an array of objects.
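
Again, no code is given in the advisory, but the flaw as described (an index read from the font object and trusted without validation) maps onto a familiar pattern. A minimal hypothetical C sketch, with invented names, might look like this:

    #include <string.h>

    #define MAX_FONTS 64

    struct font_obj {
        char name[32];
    };

    static struct font_obj fonts[MAX_FONTS];

    /* Hypothetical sketch of the flaw described above: the index comes
     * straight from the PDF's font object and is used unchecked, turning
     * the copy below into an attacker-controlled out-of-bounds write. */
    void record_font_name(int index_from_pdf, const char *name_from_pdf)
    {
        /* Missing check:
         * if (index_from_pdf < 0 || index_from_pdf >= MAX_FONTS) return; */
        strncpy(fonts[index_from_pdf].name, name_from_pdf,
                sizeof(fonts[index_from_pdf].name) - 1);
    }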

A Smartwatch Social Coach? New Tech Can Read Your Emotions

Technology gets a bad reputation at times. It’s supposed to connect us, but really, it drives us apart. It’s making us less in touch with the world around us and less inclined to deal with emotional issues.

That may be a very one-sided view of things, but it’s hard to deny that people hide behind their brightly lit screens on a daily basis.

Introducing MIT’s wearable AI system app, a piece of software designed to make people more in touch with their emotions.

How does it do this? Well, by putting those emotions on a screen right in front of you, that’s how. The concept almost feels designed to stop people from hiding from their feelings behind a glazed expression and a face buried in a device.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) have recently come up with the idea for the tech device.

How Will It Work?

The device wasn’t designed specifically to prevent people from using their devices to hide from their emotions, but rather to help those who may do so compulsively because of an underlying psychological issue.

The tech is based on the principle that human communication goes far beyond the purely verbal. People are constantly sending out signals through other means, such as mannerisms, voice intonation and eye contact. These non-verbal signals can be difficult to read, however, for people with anxiety or for those with developmental disorders such as Asperger’s syndrome.

This is what led researchers at MIT to develop software that captures audio data of a person speaking and analyzes the speaker’s tone, pitch, energy, and vocabulary.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a paper on the subject with PhD candidate Mohammad Ghassemi. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

According to MIT News, the students captured 31 different conversations of several minutes each before training two algorithms on the data. After analyzing the conversations, one algorithm classified each as either happy or sad, while the second labeled five-second blocks of conversation as positive, negative, or neutral.

The model is 7.5 per cent more accurate than existing approaches; however, it is not yet reliable enough to be used in a handheld social coaching device. According to Alhanai, that is very much the goal, and making it possible will require collecting data on a much larger scale.

Cybersecurity Implications?

There is a slightly eerie implication to having our emotions read by an artificial intelligence. It might evoke images of HAL going haywire after lip-reading the protagonist’s plans to shut him down in 2001: A Space Odyssey.

While the tech obviously isn’t on the verge of allowing an AI to hatch a murderous plan, the team have urged caution in the way the system is used in the future.

The algorithm is run locally on the user’s device in order to protect personal information. Alhanai also emphasizes that a consumer version would have to set out clear protocols for getting consent from people involved in the conversations.

The thought of this type of technology being used for third-party data gathering and targeted ads is an uncomfortable one. Despite this, we can see the tech forming an important part of the future of wearables and AI. A huge technological step in a similar direction could also see lie detection playing a role in data security, something that could even be integrated into the security of a futuristic smart home.

MIT’s wearable emotion-reading technology is an interesting step towards integrating technology into the outside world. Augmented Reality companies like Magic Leap are promising a future of enhanced reality, projecting images seamlessly over the real world instead of cutting it out with virtual images. MIT’s new tech can, in one particular respect, be seen very much in the same vein.

Our tech will arguably be enhancing our emotional lives rather than dulling them.

Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, seems to share this sentiment. Though he wasn’t involved in the project, he is fascinated by where this step could lead us:

“Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

We’ll be keeping our eye on this new tech: a tantalizing step towards making technology a seamless part of our lives instead of a distraction from the things that are important in life.

The post A Smartwatch Social Coach? New Tech Can Read Your Emotions appeared first on Panda Security Mediacenter.

CVE-2017-6189: Amazon Kindle for Windows

Posted by Nitesh Shilpkar on Feb 28

Amazon Kindle for Windows suffers from a DLL hijacking issue.

Mitre has issued CVE-2017-6189 for this issue.

The issue is vendor-confirmed, and Kindle 1.19 fixes it.

Proof of concept/demonstration:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Create a malicious DLL file and save it in your “Downloads” directory (a minimal example payload is sketched after these steps).

2. Download “Kindle Setup” and save it in your “Downloads” directory.

3. Execute “Kindle…
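
The post is truncated here, but the payload referenced in step 1 is typically a stub like the one below. This is a generic, hypothetical proof-of-concept DLL, not taken from the original advisory: it simply pops a message box from DllMain to prove that the planted library was loaded into the installer’s process. The file would need to carry the name of whichever library the installer resolves from its download directory; the truncated advisory presumably lists the exact DLL name(s).

    #include <windows.h>

    /* Generic DLL-hijacking proof of concept: if the vulnerable installer
     * loads this DLL from the "Downloads" directory instead of the system
     * copy, the message box proves attacker code ran in its process. */
    BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
    {
        if (fdwReason == DLL_PROCESS_ATTACH) {
            MessageBoxA(NULL, "DLL hijack: code execution in Kindle Setup",
                        "CVE-2017-6189 PoC", MB_OK);
        }
        return TRUE;
    }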

Advisory X41-2017-001: Multiple Vulnerabilities in X.org

Posted by X41 D-Sec GmbH Advisories on Feb 28

X41 D-Sec GmbH Security Advisory: X41-2017-001

Multiple Vulnerabilities in X.org
=================================

Overview
--------
Vendor: X.org/Freedesktop.org
Vendor URL: https://www.x.org/wiki/
Credit: X41 D-Sec GmbH, Eric Sesterhenn
Advisory-URL: https://www.x41-dsec.de/lab/advisories/x41-2017-001-xorg/
Status: Public

Timing attack against MIT Cookie
================================
Vulnerability Type: Other
Affected Products: Xorg…
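
The advisory is cut off at this point, but the vulnerability class named in the heading is well understood: if the server compares a client-supplied authentication cookie (presumably the MIT-MAGIC-COOKIE-1 value) with an early-exit routine, the response time leaks how many leading bytes of the guess were correct, allowing the cookie to be recovered byte by byte. The following C sketch is illustrative only, not X.org source; it contrasts a leaky comparison with a constant-time one.

    #include <stddef.h>

    /* Early-exit comparison: returns sooner the earlier the first wrong
     * byte occurs, so run time depends on the length of the matching
     * prefix. This is what makes a remote timing attack possible. */
    int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant-time variant: always inspects every byte, so run time is
     * independent of where (or whether) the inputs differ. */
    int ct_compare(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }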