Monthly Archives: March 2016
FBI Resists Call To Reveal Tor Hacking Secrets
The FBI Lost This Round Against Apple, But Aims To Win The War
Trident nuclear system to be updated to protect against hacking attacks
It’s a tough enough job protecting your home computer, or your business network, against the rising threat of malware and determined hackers… now imagine being responsible for the security of Britain’s nuclear deterrent.
The post Trident nuclear system to be updated to protect against hacking attacks appeared first on We Live Security.
Meet Remaiten – a Linux bot on steroids targeting routers and potentially other IoT devices
ESET researchers are actively monitoring malware that targets embedded systems such as routers, gateways and wireless access points. We call this new threat Linux/Remaiten.
The post Meet Remaiten – a Linux bot on steroids targeting routers and potentially other IoT devices appeared first on We Live Security.
How to Disable Windows 10 Upgrade (Forever) With Just One Click
If you are a Windows 7 or Windows 8.1 user who doesn’t want to upgrade to Windows 10 now or anytime soon, you might be sick of Microsoft constantly pestering you to upgrade your OS.
Aren’t you?
With its goal of deploying Windows 10 on over 1 billion devices worldwide, Microsoft has become increasingly aggressive in pushing Windows 7 and 8.1 users to upgrade to its newest operating system…
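The article’s one-click tool aside, Microsoft also documented registry policy values at the time (DisableOSUpgrade under the WindowsUpdate policy key and DisableGwx under the Gwx policy key) that suppress the upgrade offer. The sketch below is a minimal C example using the Win32 registry API, assuming those documented values are all that is needed; it must be run from an elevated (administrator) process.

```c
#include <windows.h>
#include <stdio.h>

/* Sketch only: sets the registry policy values Microsoft documented for
 * blocking the Windows 10 upgrade offer on Windows 7 / 8.1
 * (DisableOSUpgrade and DisableGwx). Run from an elevated process. */
static LONG set_policy_dword(const wchar_t *subkey, const wchar_t *name, DWORD value)
{
    HKEY key;
    LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE, subkey, 0, NULL,
                              REG_OPTION_NON_VOLATILE, KEY_SET_VALUE,
                              NULL, &key, NULL);
    if (rc != ERROR_SUCCESS)
        return rc;

    rc = RegSetValueExW(key, name, 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);
    return rc;
}

int main(void)
{
    LONG a = set_policy_dword(L"SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate",
                              L"DisableOSUpgrade", 1);
    LONG b = set_policy_dword(L"SOFTWARE\\Policies\\Microsoft\\Windows\\Gwx",
                              L"DisableGwx", 1);

    printf("DisableOSUpgrade: %ld, DisableGwx: %ld\n", (long)a, (long)b);
    return (a == ERROR_SUCCESS && b == ERROR_SUCCESS) ? 0 : 1;
}
```

Reversing the change is simply a matter of deleting the two values (or setting them back to 0).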
CVE-2015-8836 (fedora, fuseiso)
Integer overflow in the isofs_real_read_zf function in isofs.c in FuseISO 20070708 might allow remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via a large ZF block size in an ISO file, leading to a heap-based buffer overflow.
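For readers unfamiliar with the bug class, the hypothetical C sketch below (not the actual FuseISO source, whose details are not reproduced in the advisory) shows how an attacker-controlled block size can wrap a 32-bit size calculation so that the heap buffer ends up smaller than the data later copied into it.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration of the CVE-2015-8836 bug class -- NOT the
 * actual FuseISO code. An attacker-controlled ZF block size from the
 * ISO image feeds a 32-bit size calculation; if it wraps, the heap
 * buffer is allocated far smaller than the data later copied into it. */
static int read_zf_block(uint32_t zf_block_size,      /* taken from the ISO */
                         const unsigned char *src, size_t src_len)
{
    /* 32-bit arithmetic can wrap: e.g. zf_block_size = 0x40000000
     * multiplied by 4 yields 0. */
    uint32_t alloc_size = zf_block_size * 4u;

    unsigned char *buf = malloc(alloc_size);
    if (buf == NULL)
        return -1;

    /* The copy is bounded by the input length, not by the (possibly
     * wrapped) allocation size, so it can run past the end of the buffer. */
    memcpy(buf, src, src_len);            /* heap-based buffer overflow */

    free(buf);
    return 0;
}

int main(void)
{
    /* Benign call; a malicious ISO would supply a huge zf_block_size
     * together with more data than the wrapped allocation can hold. */
    unsigned char data[16] = {0};
    return read_zf_block(sizeof(data) / 4, data, sizeof(data));
}
```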
CVE-2015-8837 (fedora, fuseiso)
Stack-based buffer overflow in the isofs_real_readdir function in isofs.c in FuseISO 20070708 allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a long pathname in an ISO file.
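Again purely as an illustration of the bug class (not the actual FuseISO code), a stack-based overflow of this kind typically comes from copying an attacker-supplied pathname into a fixed-size local buffer without a length check:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of the CVE-2015-8837 bug class -- NOT the
 * actual FuseISO code. A pathname taken from the ISO image is copied
 * into a fixed-size stack buffer with no length check, so an overlong
 * name overwrites the stack (saved registers, return address). */
#define PATH_BUF_LEN 1024

static void print_entry(const char *name_from_iso)
{
    char path[PATH_BUF_LEN];

    strcpy(path, name_from_iso);          /* stack-based buffer overflow
                                             if the name is too long */
    printf("entry: %s\n", path);

    /* The usual fix is to bound the copy, e.g.
     *   snprintf(path, sizeof(path), "%s", name_from_iso);
     */
}

int main(void)
{
    print_entry("/a/short/benign/path");  /* safe input */
    return 0;
}
```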
Notice of Service Outage and followup LON1 / UK Facility
== What happened ==

On Wednesday, February 24th, at 6pm UTC, the DC hosting some of the CentOS equipment used for various roles suffered multiple electrical power outages. The facility was completely dark for just under 2 hours, and we were able to start recovering services by 8pm UTC. By midnight we had most services restored, and by 2:00am UTC on February 25th all services were restored. The machines in those racks were running on batteries (UPS units in the racks) but finally went down in an uncontrolled way due to a loss of communication with that UPS. Subsequently, on Monday, March 14th, we suffered another power outage in the racks, this time due to an overload on the rack power circuits.

== Services that were impacted ==

- severity critical: the mirrorlist.centos.org node for IPv6 went down (while multiple mirrorlist.centos.org nodes for IPv4 remained online). Machines with only IPv6 connectivity could not use yum to retrieve the list of nearest mirrors.
- severity medium: our main build-services queue management services were down; note that this did not impact our ability to build, test and deliver updates.
- severity medium: www.centos.org and www.centos.org/forums weren't reachable over IPv6. At the moment those services are natively reachable over IPv4, but proxied through nodes in that DC for IPv6 users. Most tested browsers fell back to IPv4 during that period.
- severity medium: CentOS DevCloud (https://wiki.centos.org/DevCloud). CentOS developers weren't able to instantiate new CentOS test VMs for their work, and also weren't able to reach their existing ones.
- severity low: several publicly facing small services such as http://planet.centos.org and http://seven.centos.org (not critical, and they could be restored quickly to other VMs elsewhere).
- severity low: the server leading the armv7hl builds for the Plague build farm was also offline, meaning no armhfp builds during that timeframe (but no updates needed to be built, so the impact was mitigated).

== Followup actions and notes ==

Over the years, the baseline recovery model we've used and tried to enforce is one of 'restore in place', taking a downtime hit if needed, while ensuring service continuity for the user-facing components (the mirrorlist service and the CentOS update and content distribution services). For other resources, like the main website, we ensure there are good backups available in multiple places, usable to restore services should there be a need. This model has worked well for us over the years, and we've had very little, if any, service outage with user impact. The restore-in-place / restore-outside-HA approach also meant we were able to better utilise the exclusively sponsored machines we rely on. However, as the project grows, with a lot more infrastructure being consolidated into a few locations for non-CDN services, our exposure to service downtime has dramatically increased. It's clear that we need to expand the scope of where we back up to, how we back up, how we anticipate failure, and our ability to restore services in a timely manner should there be facility outages. In the coming weeks we are going to undertake a deep dive into our infrastructure design and delivery, first coming up with a consolidated set of risks we need to manage against, and then working towards reducing that risk and spreading availability as needed.

Our backend storage platform for the DevCloud, and the persistent storage for other nodes in the facility, is run from a distributed, replicated Gluster setup. In spite of the sudden loss of power, in a production environment with hundreds of running VMs and dozens of running data jobs, we were able to trivially recover our entire data set with minimal data loss. Some of the running VMs inside the DevCloud did see local filesystem issues, but we don't think that was a backing-storage issue. This event has dramatically increased our confidence in the Gluster technology stack and we will certainly be looking at extending deployments of it internally.

== Comments about hosting facility ==

See their status post about this outage, "London Power Outage". We have multiple racks at this facility and a long-standing relationship with them going back to late summer 2012. Over this period we have had a near-perfect uptime record for our equipment there. Above all, we have been consistently impressed with the speed and the knowledgeable support we've received at the DC. In many cases, how a facility reacts to an outage defines the real service value, and in this case we can only commend the fantastic support we had through the outage hours. We do, however, feel there could be better monitoring and reporting of some of the facility's information, and we will be working with them to improve in that regard.

Fabian Arrotin and Karanbir Singh
The CentOS Project
The Internet of Things: Pacemakers

Fitness bracelets, smartwatches, and other wearable technology have joined the “Internet of Things”: everyday objects that collect and exchange information (think vehicles, smart thermostat systems, and any other device with online capabilities). But did you know that there are much more advanced health-monitoring devices out there?
The high-tech pacemakers made today have a ton of benefits, especially for patients who require constant checks and intensive monitoring of their health. These devices have connections that allow them to exchange information with hospital staff and doctors, as well as with the device’s vendor. Although the connections are not always active, they are used to configure the devices and set their parameters, to remotely monitor their activity, and to transmit the data they record. So, what could be the downside?
Can a pacemaker be hacked?
Well, with any connected device we need to consider whether, and how, it might be hacked. Some researchers and ethical hackers have begun to work in this field to find potential vulnerabilities, but it hasn’t been easy: manufacturers are reluctant to give details about the design or the specifications of the software these devices run, which makes it difficult to carry the research through.
So what do we know so far? In 2008, a team of researchers from the Archimedes Center for Medical Device Safety at the University of Michigan in the United States confirmed that these pacemakers can be hacked, making it possible to extract personal information from the devices or modify their configuration, putting the patient’s life in jeopardy.
It was rumored that the well-known hacker Barnaby Jack had developed software to hack pacemakers, making it possible to kill anyone wearing one, no matter the distance. He died shortly before he could demonstrate it at the Black Hat conference in Las Vegas. Whether a pacemaker can really be controlled over an internet connection, regardless of distance, remains unclear: no published research confirms or disproves it.
The most recent research has been done by PhD research scientist and security expert Marie Moe. She has embarked on a new project to analyze the risks and weaknesses of these devices (pacemakers and other wearable technology in medicine) with the help of other professionals in the sector. Moe became very involved in the project after realizing the risks of her own pacemaker.
The aim of her project is to prove that these products are not always safe for patients, despite constant development. Moe hopes her research will help prevent future attacks and allow manufacturers to fix any security flaws in their devices. Recently, the FDA warned of vulnerabilities found in drug infusion pumps, which administer controlled amounts of medicine to patients at set rates. The flaws in their systems allow unauthorized firmware updates; in theory, a hacker could alter the software and configure the machine however they want, even setting drug doses to lethal levels.

Keep in mind:
Information is free, protecting yourself is cheap, but no one can afford to lose a loved one because of a damaged device.
The post The Internet of Things: Pacemakers appeared first on Panda Security Mediacenter.
