Tag Archives: Security

XARA – With This Exploit Hackers Can Steal Your Passwords

Six university researchers discovered high-impact “zero-day” security weaknesses in iOS and OS X, which can be abused by getting a malicious app approved by the Apple App Store – something they managed to do without any issues. Through this app they were able to access sensitive data from other apps – with dire consequences. The researchers state that “our sandboxed app successfully retrieved from the system’s keychain the passwords and secret tokens of iCloud, email and all kinds of social networks stored there by the system app Internet Accounts, and bank and Gmail passwords from Google Chrome […]”

It does sound unbelievable, doesn’t it? Just take a look at the video below to see a malicious sandboxed app on OS X steal all private notes in the Evernote app:

Or take a look at how it can steal passwords for any website:

According to their research, 88.6% of the apps they tested were found to be completely exposed to the XARA attacks. This includes popular apps like Evernote, WeChat, and 1Password: “In our study, we downloaded 1,612 free apps from the MAC App Store. These apps cover all 21 categories of the store, including social networking, finance, business, and others. In each category, we picked up all the free apps when less than 100 of them are there, and top 100 otherwise. Also from the iOS App Store, we collected 200 most popular apps, 40 each from “All Categories”, “Finance”, “Business”, “Social Networking” and “Productivity”, after removing duplications.”

The researchers informed Apple about the issues in October 2014; a fix still appears to be outstanding.

Take a look at the research paper to read all about the issue.


Beware of phishing scams after the LastPass breach

In a blog post, LastPass revealed that it “discovered and blocked suspicious activity on our network” and that it found “no evidence that encrypted user vault data was taken”.

LastPass seems to be transparent in sharing information about this security breach. It has provided what appears to be good technical detail about the information potentially compromised, along with the type of cryptography used to secure its users’ “Master” passwords.

The actual compromise of the ‘server per user salts’ and the ‘authentication hashes’ would allow the attackers to brute-force a targeted user’s password, but LastPass says this information was created using what is known as a ‘key derivation function’ called PBKDF2, which is considered best practice.

This makes it extremely difficult for attackers to brute-force the passwords in bulk and instead limits attackers to cracking one password at a time – meaning they would have to target a particular user (or use many computers to target multiple users).
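To illustrate why a key derivation function makes bulk cracking expensive, here is a minimal, hypothetical sketch in Python (not LastPass’s actual implementation; the hash algorithm, iteration count and salt size are assumptions) of verifying a password against a PBKDF2 hash stored with a per-user salt:

import hashlib, hmac, os

ITERATIONS = 100000  # illustrative value; the real service's parameters may differ

def hash_password(password, salt):
    # Every guess forces an attacker to repeat this deliberately slow computation.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

def verify(password, salt, stored_hash):
    return hmac.compare_digest(hash_password(password, salt), stored_hash)

salt = os.urandom(16)                         # unique per user
stored = hash_password("correct horse", salt)
print(verify("correct horse", salt, stored))  # True
print(verify("guess123", salt, stored))       # False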

However, the weakest link here is the compromise of ‘email addresses’ and ‘password reminders’. Two likely scenarios come to mind as a result of this compromised information:

(1) Phishing attacks against LastPass users are now very likely, if the attackers choose to send email pretending to be from LastPass to trick users into divulging their Master passwords.

(2) The password reminders may give the attackers clues when attempting to brute-force a password. Some users choose password reminders that are so easy to interpret they almost reveal the password in full.

Worse, adding the password reminder information to a phishing email may increase the success rate of that type of attack.

LastPass is right to advise all its users of this compromise, and hopefully all LastPass users will heed the warning, change their Master passwords, and activate the multi-factor authentication options.

The positives in this case, however, are the best-practice use of cryptography in the storage of master passwords (i.e. PBKDF2) and the attackers’ failure to access ‘encrypted data’ (stored passwords and Master Passwords). This is potentially down to LastPass keeping this sensitive data on separate systems.

If the attackers had been able to compromise the ‘encrypted user data’, then LastPass would surely be advising its users to change not only their Master password but every other password stored within their accounts – and that would be a monumental task for all concerned.

Single sign-on with OpenConnect VPN server over FreeIPA

In March of 2015 the 0.10.0 version of OpenConnect VPN was released. One of its main features is the addition of MS-KKDCP support and GSSAPI authentication. Putting the acronyms aside that means that authentication in FreeIPA, which uses Kerberos, is greatly simplified for VPN users. Before explaining more, let’s first explore what the typical login process is on a VPN network.

Currently, with a typical VPN server or product one needs to log in to the VPN server using a username-password pair, and then sign into the Kerberos realm using, again, a username-password pair. Many times, exactly the same credentials are used for both logins. That is, we have two independent secure authentication methods to log in to a network, one after the other, consuming the user’s time without necessarily increasing the security level. Can things be simplified to achieve single sign-on over the VPN? We believe yes, and that’s the reason we combined the two independent authentications into a single authentication instance. The user logs into the Kerberos realm once and uses the obtained credentials to log in to the VPN server as well. That way, the necessary passwords are asked for only once, minimizing login time and frustration.

How is that done? If the user needs to connect to the VPN in order to access the Kerberos realm, how could he perform Kerberos authentication prior to that? To answer that question we’ll first explain the protocols in use. The protocol followed by the OpenConnect VPN server is HTTPS based, hence, any authentication method available for HTTPS is available to the VPN server as well. In this particular case, we take advantage of the SPNEGO and the MS-KKDCP protocols. The former enables GSSAPI negotiation over HTTPS, thus allowing a Kerberos ticket to be used to authenticate to the server. The MS-KKDCP protocol allows an HTTPS server to behave as a proxy to a Kerberos Authentication Server, and that’s the key point which allows the user to obtain the Kerberos ticket over the VPN server protocol. Thus, the combination of the two protocols allows the OpenConnect VPN server to operate both as a proxy to the KDC and as a Kerberos-enabled service. Furthermore, the usage of HTTPS ensures that all transactions with the Kerberos server are protected using the OpenConnect server’s key, ensuring the privacy of the exchange. However, there is a catch; since the OpenConnect server is now a proxy for Kerberos messages, the Kerberos Authentication Server cannot see the real IPs of the clients, and thus cannot prevent a flood of requests which could cause a denial of service. To address that, we introduced a point system in the OpenConnect VPN server for banning IP addresses when they perform more than a pre-configured number of requests.

As a consequence, with the above setup, the login process is simplified by reducing the steps required to log in to a network managed by FreeIPA. The user logs into the Kerberos Authentication Server, and the VPN to the FreeIPA-managed network is made available with no additional prompts.

Wouldn’t that reduce security? Isn’t it more secure to ask the user for one set of credentials to connect to the home network and a different set to access the services inside it? That’s a valid concern. There can be networks where this is indeed a good design choice, but in other networks it may not be. Stacking multiple authentication methods can result in users trying their different credentials at the different login prompts, effectively training the less security-oriented to try the passwords they were provided anywhere until something works. However, it is desirable to increase the authentication strength when coming from untrusted networks. For that, it is possible, and recommended, to configure FreeIPA to require a second factor authenticator (OTP) as part of the login process.

Another, equally important concern for single sign-on is how to avoid granting VPN access for the whole validity period of a Kerberos ticket without re-authentication. That is, given the long lifetime of Kerberos tickets, how can we prevent a stolen laptop from being able to access the VPN? We address that by enforcing a configurable TGT lifetime limit on the VPN server. This way, VPN authentication will only occur if the user’s ticket is fresh, and the user’s password will be required otherwise.

Setting everything up

The next paragraphs move from theory to practice, and describe the minimum set of steps required to set up the OpenConnect VPN server and client with FreeIPA. At this point we assume that a FreeIPA setup is already in place and a realm named KERBEROS.REALM exists. See the Fedora FreeIPA guide for information on how to set up FreeIPA.

Server side: Fedora 22, RHEL7

The first step is to install the latest release of the 0.10.x branch of the OpenConnect VPN server (ocserv) on the server system. You can use the following command. On RHEL7 you will also need to set up the EPEL7 repository.

yum install -y ocserv

That will install the server in an unconfigured state. The server uses a single configuration file found in /etc/ocserv/ocserv.conf, which contains several directives documented inline. To allow authentication with Kerberos tickets as well as with a password (e.g., for clients that cannot obtain a ticket, such as clients on mobile phones), enable both PAM and GSSAPI authentication with the following two lines in the configuration file.

auth = pam
enable-auth = gssapi[tgt-freshness-time=360]

The option ‘tgt-freshness-time’, available since OpenConnect VPN server 0.10.5, specifies the maximum age, in seconds, of a Kerberos ticket (TGT) that is accepted for VPN authentication. A user will have to re-authenticate if this time is exceeded. In effect, this prevents a single Kerberos ticket from being used for VPN access over its whole lifetime.

The following line will enable the MS-KKDCP proxy on ocserv. You’ll need to replace KERBEROS.REALM with your realm and KDC-IP-ADDRESS with your KDC’s IP address.

kkdcp = /KdcProxy KERBEROS.REALM tcp@KDC-IP-ADDRESS:88

Note that, for PAM authentication to operate, you will also need to set up /etc/pam.d/ocserv. We recommend using SSSD’s pam_sss module for that, although the file can contain anything that best suits the local policy. An example of an SSSD PAM configuration is shown in the Fedora Deployment Guide.
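For illustration, a minimal /etc/pam.d/ocserv that delegates everything to SSSD might look like the following sketch (adjust it to whatever suits your local policy):

auth     required   pam_sss.so
account  required   pam_sss.so
password required   pam_sss.so
session  optional   pam_sss.so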

The remaining options in ocserv.conf are about the VPN network setup; the comments in the default configuration file should be self-explanatory. At minimum you’ll need to specify a range of IPs for the VPN network, the addresses of the DNS servers, and the routes to push to the clients (an illustrative snippet follows the commands below). At this point the server can be run with the following commands.

systemctl enable ocserv
systemctl start ocserv

The status of the server can be checked using “systemctl status ocserv”.
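For reference, the network-related portion of ocserv.conf mentioned above could look something like this sketch (the addresses and routes are purely illustrative; adapt them to your environment):

# Address pool handed out to VPN clients
ipv4-network = 192.168.128.0
ipv4-netmask = 255.255.255.0
# DNS server(s) pushed to the clients
dns = 192.168.1.1
# Internal network route(s) pushed to the clients
route = 192.168.1.0/255.255.255.0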

Client side: Fedora 21, RHEL7

The first step is to install the OpenConnect VPN client, named openconnect, on the client system. The version must be 7.05 or later. On RHEL7 you will need to set up the EPEL7 repository.

yum install -y openconnect network-manager-openconnect

Set up Kerberos to use ocserv as its KDC. For that you’ll need to modify /etc/krb5.conf to contain the following:

[realms]
KERBEROS.REALM = {
    kdc = https://ocserv.example.com/KdcProxy
    http_anchors = FILE:/path-to-your/ca.pem
    admin_server = ocserv.example.com
    auth_to_local = DEFAULT
}

[domain_realm]
.kerberos.test = KERBEROS.REALM
kerberos.test = KERBEROS.REALM

Note that ocserv.example.com should be replaced with the DNS name of your server, and /path-to-your/ca.pem should be replaced by a PEM-formatted file which holds the server’s Certificate Authority. For the kdc option, the server’s DNS name is preferred to an IP address to simplify server name verification for the Kerberos libraries. At this point you should be able to use kinit to authenticate and obtain a ticket from the Kerberos Authentication Server. Note, however, that kinit is very brief in the errors it prints, and a server certificate verification error will not be easy to debug. Ensure that the http_anchors file is in PEM format, that it contains the Certificate Authority that signed the server’s certificate, and that the server’s certificate DNS name matches the DNS name set up in the file. Note also that this approach requires the user to always use OpenConnect’s KDC proxy. To avoid that restriction, and allow the user to use the KDC directly when on the LAN, we are currently working towards auto-discovery of the KDC.
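If in doubt, one quick way to inspect the CA file (assuming the openssl command-line tool is available and the file contains a single certificate) is:

openssl x509 -in /path-to-your/ca.pem -noout -subject -dates

If the file is not PEM-encoded the command fails with an error; otherwise it prints the certificate’s subject and validity period.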

Then, at a terminal run:

$ kinit

If the command succeeds, a ticket has been obtained, and at this point you will be able to set up openconnect from the NetworkManager GUI and connect to the server using your Kerberos credentials. To set up a VPN via NetworkManager, open Network Settings from the system menu, select VPN, and add a new network of type “Cisco AnyConnect Compatible VPN (openconnect)”. In the Gateway field, fill in the server’s DNS name, add the server’s CA certificate, and that is all that is required.

To use the command-line client with Kerberos, the following trick is recommended: create a tun device as root, and then run the openconnect client as a normal user. The reason to avoid running the openconnect client with sudo is that sudo would prevent access to the user’s Kerberos credentials.

$ sudo ip tuntap add vpn0 mode tun user my-user-name
$ openconnect server.example.com -i vpn0

Client side: Windows

A Windows client for OpenConnect VPN is available at this web site. Its setup, similarly to NetworkManager, requires setting the server’s DNS name and its certificate. Configuring Windows for use with FreeIPA is outside the scope of this text, but more information can be found in this FreeIPA manual.

Conclusion

A single sign-on solution using FreeIPA and the OpenConnect VPN has many benefits. The core optimization, a single login prompt for the user to authorize access to network resources, saves users time and frustration. It is important to note that these optimizations are possible by making VPN access part of the deployed infrastructure, rather than an afterthought. With careful planning, an OpenConnect VPN solution can provide a secure and easy approach to network authentication.

LastPass Has Been Breached: Change Your Master Password Now

Luckily no passwords were actually stolen in the attack on LastPass last Friday, according to the company’s blog: “In our investigation, we have found no evidence that encrypted user vault data was taken, nor that LastPass user accounts were accessed.” Nonetheless, account email addresses, password reminders, server per user salts, and authentication hashes were compromised.

Because of that, everyone using the LastPass service will receive an email prompting them to reset their master password, according to the blog entry. On top of that, the company will also require users who log in from a new device or IP address to verify their identity via email if multi-factor authentication is not enabled for the specific account.

Regarding your stored passwords, the blog says: “Because encrypted user data was not taken, you do not need to change your passwords on sites stored in your LastPass vault. As always, we also recommend enabling multifactor authentication for added protection for your LastPass account.”

So apparently there is no need to change every password you have stored with them. You can if you are really concerned about your accounts, but according to LastPass there is no need for it. Just make sure none of the other passwords you use is the same as the master password of your LastPass account.


Emojis: We Want To Be Your New PIN

Intelligent Environments’ solution to your run-of-the-mill four-digit PIN is not some pill you swallow or “secrets” you and your smartphone share. Their idea involves lots of little pictures, so-called emojis, that will replace your accounts’ PIN. Emojis are the evolved smileys that sometimes really remind you of the good old Windows clip art. You normally use them when chatting on WhatsApp (or any other app, really) with your friends and family.

Now you might ask yourself the same thing I did: why would I ever replace my trusty old PIN? The answer to that question is pretty simple. A normal PIN used to secure your account most of the time only consists of four digits from 0 to 9. This means that a traditional PIN has 7,290 unique permutations of four non-repeating numbers. An emoji passcode that relies on a base of 44 emojis would sport 3,498,308 unique permutations of non-repeating cute little images.
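For the curious, those figures match a count of four-symbol codes in which no symbol immediately repeats the one before it; a purely illustrative Python sketch reproduces them:

def passcode_count(num_symbols, length=4):
    # num_symbols options for the first symbol, (num_symbols - 1) for each
    # following symbol, since only immediate repeats are disallowed.
    return num_symbols * (num_symbols - 1) ** (length - 1)

print(passcode_count(10))  # 7290 for digits 0-9
print(passcode_count(44))  # 3498308 for a base of 44 emojis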

According to Intelligent Environments there are other advantages as well, apart from the mathematical one: “This new emoji security technology is also easier to remember as research shows humans remember pictures better than words.” And memory expert Tony Buzan adds: “The Emoji Passcode plays to humans’ extraordinary ability to remember pictures, which is anchored in our evolutionary history. We remember more information when it’s in pictorial form, that’s why the Emoji Passcode is better than traditional PINs.”

Well – I’ve had no issues so far when it comes to my four-digit PIN, but I would certainly not mind using emojis at all!
💻🔒🔐😂


OPM: Are Personnel Records of All Fed Workers Exposed?

Two weeks ago OPM, the US Office of Personnel Management, was hacked and the information of 4 million federal government workers was exposed. This is, of course, horrible. But that’s not all: on Friday we learned that the issue at hand was huge and much bigger than everyone believed at first.

As can be read in a letter to OPM Director Katherine Archuleta, David Cox, the president of the American Federation of Government Employees, believes that “based on the sketchy information OPM has provided, the Central Personnel Data Files was the targeted database, and the hackers are now in possession of all personnel data for every federal employee, every federal retiree, and up to one million former federal employees.”

Cox goes on to say that he thinks the hackers have the Social Security numbers, military records, and even veteran status information of every affected person. Addresses, birth dates, job and pay histories, health and life insurance and pension information, age, gender, and almost everything else you’d never want anyone else to know are included on his list as well.

Sounds bad? That’s not all. The letter states: “Worst, we believe that Social Security numbers were not encrypted, a cybersecurity failure that is absolutely indefensible and outrageous.”

I bet they now wish that “only” 4 million records got stolen … :(


Securing a Heterogeneous Internet of Things

Analyst firm IDC predicts that the number of Internet of Things (IoT) devices—from home appliances to commercial applications such as door locks and sensors—will grow into a $7.1 trillion market by 2020, compared to $1.9 trillion in 2013.

This rapidly growing market is giving rise to a land grab of sorts: companies are vying to build the one IoT platform that will link all devices, and by linking them make them “smarter” as they communicate with one another.

So it may come as no surprise that at its developer conference last week, Google announced Brillo, a new Android-based operating system (OS) for the Internet of Things. The connected OS promises to use as little as 32 or 64 MB of RAM to run, making it power-efficient and light enough for “things” such as light bulbs, keys or door locks.

By basing its IoT platform on a familiar, widely used (Android) OS, Google is offering a solution that developers worldwide already know. However, by offering yet another OS for Things, it also compounds the fragmentation of the space. A wide array of vendors and consortiums now offer operating systems, connectivity platforms and discovery protocols.

With each vendor and approach come security threats and attack vectors. These threat surfaces are multiplied by the connectivity and discovery protocols and by the routing of data. There is a trend to route data from each device to the cloud, even when, intuitively, this should not be necessary. This enables device manufacturers to utilize hardware, services and data business models. It is not a trend that is likely to slow down by itself.

Securing this spider web of technology and data is both a challenge and a necessity. When a smart lock knows when people are home, or when your security camera sees where you put your valuables, these devices hold very valuable information for criminals. Less obvious, but just as worrisome, is the aggregate data about you that travels the airwaves in your home and beyond.

Brillo, being built on the mature Android platform, has the advantage of being hardened for security over time, and the disadvantage that nefarious players already know its ins and outs. Other, less widely deployed platforms will go through their own maturity evolution as developers and hackers dig through them.

Because of the vast number of suppliers of Things, and the wide variance of the platforms and protocols, a full security solution is unlikely to come from one of these players. The answer to the IoT security dilemma will more likely come via third-party security companies who’ll play a major role in providing secure, safe digital environments for users across connected devices.

To keep the Internet of Things from devolving into the Internet of Threats or the “Illusion of Trust,” the industry needs to shore up standards on privacy and security. Today, the IoT is still evolving rapidly, and its standards and regulations are just being developed. We’re at a moment in time that’s similar to the birth of the World Wide Web 25 years ago. This time, however, we can build a hyper-connected world based on safety and trust and the principles of protection and privacy—literally, we can build security into the foundation of the IoT infrastructure.

One of the fathers of the modern Internet, Vint Cerf, once said he regrets not building more security into the architecture of the Internet. It was difficult at the time to anticipate the level of cybercrime, cyberwarfare and cyberespionage that would emerge. The promise of the IoT is exciting, with many business and consumer applications, including the connected car and the connected home. But for our vision to come to fruition, let’s learn the lesson of our predecessors and design the IoT and its devices by prioritizing privacy and security as central features.

An area we are passionate about is what we call the “law of least data.” This encapsulates the desire for data to be routed as directly between agents as possible. Two devices in your home should not have to send data to the cloud – even if they are from two different vendors – when they are talking to each other. Your next generation smartwatch should not have to talk to the cloud in order to read data out of your pacemaker. Of course some setup, or discovery metadata, may be required upon installation, but thereafter data should be kept personal whenever possible.

By agreeing on some defining principles, such as the law of least data, we can build a better Internet of Things.

Flaw in Mail.app Can Be Used to Hijack iCloud Password

The flaw lies in Mail.app, Apple’s default email program for iOS. According to security researcher Jan Soucek, “this bug allows remote HTML content to be loaded, replacing the content of the original e-mail message. JavaScript is disabled in this UIWebView, but it is still possible to build a functional password “collector” using simple HTML and CSS.” To reduce suspicion, the code even detects whether someone has already visited the page in the past by using cookies. If that is the case, it stops displaying the password prompt.

This means that hackers could easily create phishing mails which show a form that looks exactly like the iCloud login pop-up window everyone knows. The user would be asked for their username and password, which – once entered – would then be transmitted to the cybercriminals. Just take a look at the proof-of-concept video below to see how easy it would be to trick an unsuspecting user!

Soucek discovered the flaw in January 2015 and informed Apple immediately. Since then, no action has been taken to fix the vulnerability. In the hope that it will make Apple take the bug more seriously, the security researcher has now published his findings together with a proof-of-concept video and the corresponding code.

Feel free to follow this link in order to find out more about the issue.


The hidden costs of embargoes

It’s 2015 and it’s pretty clear the Open Source way has largely won as a development model for large and small projects. But when it comes to security we still practice a less-than-open model of embargoes with minimal or, in some cases, no community involvement. With the transition to more open development tools, such as Gitorious and GitHub, it is now time for the security process to change and become more open.

The problem

In general the argument for embargoes simply consists of “we’ll fix these issues in private and release the update in a coordinated fashion in order to minimize the time an attacker knows about the issue and an update is not available”. So why not reduce the risk of security issues being exploited by attackers and simply embargo all security issues, fix them privately and coordinate vendor updates to limit the time attackers know about this issue? Well for one thing we can’t be certain that an embargoed issue is known only to the people who found and reported it, and the people working on it. By definition if one person is able to find the flaw, then a second person can find it. This exact scenario happened with the high profile OpenSSL “HeartBleed” issue: initially an engineer at Codenomicon found it, and then it was independently found by the Google security team.

Additionally, the problem is a mismatch between how Open Source software is built and how security flaws are handled in Open Source software. Open Source development, testing, QA and distribution of the software mostly happen in public now. Most Open Source organizations have public source code trees that anyone can view, and in many cases submit change requests to. As well, many projects have grown in size, not only in code but in contributors, and some now involve hundreds or even thousands of developers (OpenStack, the Linux Kernel, etc.). It is clear that the Open Source way has scaled and works well with these projects; however, the old-fashioned way of handling security flaws in secret has not scaled as well.

Process and tools

The Open Source development method generally looks something like this: the project has a public source code repository that anyone can copy from, and that specific people can commit to. The project may have a continuous integration platform like Jenkins, and then QE testing to make sure everything still works. Fewer and fewer projects have a private or hidden repository, partly because there is little benefit to doing so, and many providers do not allow private repositories on the free plans that they offer. This also applies to the continuous integration and testing environments used by many projects (especially when using free services). So actually handling a security issue in secret without exposing it means that many projects cannot use their existing infrastructure and processes, but must instead email patches around privately and do builds and testing on their own workstations.

Code expertise

But let’s assume that an embargoed issue has been discovered and reported to upstream and only the original reporter and upstream know. Now we have to hope that between the reporter and upstream there is enough time and expertise to properly research the issue and fully understand it. The researcher may have found the issue through fuzzing and may only have a test case that causes a crash but no idea even what section of code is affected or how. Alternatively the researcher may know what code is affected but may not fully understand the code or how it is related to other sections of the program, in which case the issue may be more severe than they think, or perhaps it may not be as severe, or even be exploitable at all. The upstream project may also not have the time or resources to understand the code properly, as many of these people are volunteers, and projects have turn over, so the person who originally wrote the code may be long gone. In this case making the issue public means that additional people, such as vendor security teams, can also participate in looking at the issue and helping to fully understand it.

Patch creation for an embargoed issue means only the researcher and upstream participate. The end result of this is often patches that are incomplete and do not fully address the issue. This happened with the Bash Shellshock issue (CVE-2014-6271), where the initial patch, and even subsequent patches, were incomplete, resulting in several more CVEs (CVE-2014-6277, CVE-2014-6278, CVE-2014-7169). For a somewhat complete listing of such examples simply search the CVE database for “because of an incomplete fix for”.

So assuming we now have a fully understood issue and a correct patch we actually need to patch the software and run it through QA before release. If the issue is embargoed this means you have to do so in secret. However, many Open Source projects use public or open infrastructure, or services which do not support selective privacy (a project is either public or private). Thus for many projects this means that the patching and QA cycle must happen outside of their normal infrastructure. For some projects this may not be possible; if they have a large Jenkins infrastructure replicating it privately can be very difficult.

And finally, once we have a fully patched and tested source code release, we may still need to coordinate the release with other vendors, which carries significant overhead and time constraints for all concerned. Obviously, if the issue is public, the entire effort spent on privately coordinating the issue is not needed and that effort can be spent on other things, such as ensuring the patches are correct and that they address the flaw properly.

The Fix

The answer to the embargo question is surprisingly simple: we only embargo the issues that really matter and even then we use embargoes sparingly. Bringing in additional security experts, who would not normally be aware due to the embargo, rather than just the original researcher and the upstream project, increases the chances of the issue being properly understood and patched the first time around. Making the issue public will get more eyes on it. And finally, for the majority of lower severity issues (e.g. most XSS, temporary file vulnerabilities) attackers have little to no interest in them, so the cost of embargoes really makes no sense here. In short: why not treat most security bugs like normal bugs and get them fixed quickly and properly the first time around?

AVG begins bug bounty program

For AVG, helping to keep our 200 million users safe online isn’t just a question of reacting to threats as and when they appear. Instead, our security is built on a foundation of deliberate, pre-emptive action in order to keep their data and identity safe.

One way to be proactive is through a bug bounty program, which offers rewards to researchers that legally find and responsibly disclose vulnerabilities. By safely identifying and fixing vulnerabilities before attackers discover them, bug bounty programs help make software and websites more secure.

This extra security is one of the reasons I’m pleased to share that AVG has started a bug bounty program on Bugcrowd. Bugcrowd gives AVG the opportunity to have a well-established and respected community review its PC security products. This proactive approach to the security of our software will give our more than 200 million active users even more peace of mind and protection.

By starting a bug bounty program, AVG joins other companies like Google, Microsoft, Facebook and Apple in taking that extra step to secure their users.

Microsoft Bug Bounty

How can you get involved?

The AVG bug bounty at Bugcrowd is currently focused on two of our PC based security products, AVG AntiVirus FREE 2015 and AVG Internet Security 2015.

If you think you’ve got what it takes to become a bug bounty hunter, you can see all the technical details here on AVG’s bug bounty page at Bugcrowd.