Tag Archives: chromium

Scout Browser goes under peer review


It’s not enough to say you are good. The moment of truth is when an outside expert or peer takes a hard look at what you do – and then gives you an educated thumbs up. In academia, this is a peer review and it is essential for any worthwhile paper. For software developers, it […]

The post Scout Browser goes under peer review appeared first on Avira Blog.

Avira Scout: the browser for your online security & privacy


Avira Scout has been launched to the public, combining our security expertise with the Chromium code to give users a free browser that slashes the risks from malware, malvertising, and obtrusive trackers. Scout provides multiple layers of security and privacy without interrupting the user experience. We’ve integrated a selection of best-in-class security and privacy […]

The post Avira Scout: the browser for your online security & privacy appeared first on Avira Blog.

High-Severity Chrome Vulnerabilities Earn Researcher $32K in Rewards

Researcher Mariusz Mlynski found and disclosed four high-severity vulnerabilities in Chrome’s Blink rendering engine, earning himself $32,000 through the Chrome Rewards program.

Avira’s Secure Browser: Plans and Tactics (Part 2)

Our goal is to create an easy-to-use browser that is secure and respects your privacy. These are the more advanced tactics we will be using:

Our Cloud DBs

Adding cloud features to file scanning was a great success: the detection quality for malicious files improved dramatically. In short:

On the client, a behavior-detection style pre-selection runs first. If a file is suspicious, the cloud server is asked whether the file is already known.

If unknown:

  • An upload is requested
  • The file is uploaded to the server
  • There we have several detection modules that cannot be deployed on the customers’ PCs (an AI with a large database, sandboxes for behavior classification, etc.). They scan and classify the file
  • The database is updated
  • The results are sent back, you are protected
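The steps above can be sketched in a few lines of Python. This is a minimal sketch: the pre-selection heuristic, the function names, and the in-memory "cloud database" are invented for illustration and are not Avira's actual protocol.

```python
import hashlib

# Illustrative in-memory stand-in for the cloud verdict database.
CLOUD_DB = {}  # sha256 hex digest -> "clean" | "malicious"

def looks_suspicious(data: bytes) -> bool:
    """Client-side pre-selection (placeholder heuristic)."""
    return data[:2] == b"MZ"  # e.g. only escalate executable-looking files

def classify_on_server(data: bytes) -> str:
    """Server-side modules (AI, sandboxes, ...) - stubbed out here."""
    return "malicious" if b"evil" in data else "clean"

def check_file(data: bytes) -> str:
    if not looks_suspicious(data):
        return "clean"                      # file never leaves the client
    digest = hashlib.sha256(data).hexdigest()
    if digest in CLOUD_DB:                  # already known to the cloud?
        return CLOUD_DB[digest]
    verdict = classify_on_server(data)      # upload + server-side scan
    CLOUD_DB[digest] = verdict              # the database is updated
    return verdict                          # the result is sent back
```

Note that only suspicious files trigger a cloud round trip at all; everything else is handled locally.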

Over the last years we have built extensive databases covering malicious files. We should have something similar for the browser and use our large knowledge base and server-side classification tools for web threats as well.

It should look something like this:

  • The browser detects something strange (“behavior detection”); this is called pre-selection
  • It asks the backend database if this is already known
  • If not: relevant data (URL, file, …) is uploaded for inspection
  • Our server-based tools (and our analysts) will classify the upload and update our databases
  • The result is sent back directly (within milliseconds; yes, the tools are that fast. We will try to improve our analysts 😉)
  • You are protected
  • We are improving our “evil parts of the internet” map.
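The same round trip for web threats can be sketched as follows. Again, the pre-selection markers and classification rule are toy assumptions for illustration only:

```python
KNOWN_URLS = {}  # url -> cached verdict (stand-in for the backend database)

def preselect(url: str) -> bool:
    """Browser-side 'behavior detection': only escalate strange URLs."""
    return any(marker in url for marker in (".exe", "@", "login"))

def server_classify(url: str) -> str:
    """Server-side tools and analysts, stubbed out for the sketch."""
    return "malicious" if url.endswith(".exe") else "clean"

def check_url(url: str) -> str:
    if not preselect(url):
        return "clean"              # nothing is sent; saves bandwidth
    if url in KNOWN_URLS:           # the backend already knows this one
        return KNOWN_URLS[url]
    verdict = server_classify(url)  # relevant data uploaded for inspection
    KNOWN_URLS[url] = verdict       # improve the "evil internet" map
    return verdict
```

The `preselect` step is exactly where the signal-to-noise trade-off discussed below lives: the stricter it is, the fewer harmless pages ever reach the server.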

To get there we will have to improve the signal-to-noise ratio. We are only interested in malicious pages. If the pre-selection in the browser is too aggressive and sends non-malicious pages to us, it's a waste of CPU cycles and bandwidth. With millions of users as a factor, even minor slips will be expensive and annoying for everyone involved.

We will also remove private data before sending it (we are not interested in user data; we are spying on malware). Personal data is actually toxic for us: servers get hacked, databases stolen, companies gag-ordered. Not having that kind of data on our servers protects us as well as you. Just think of it: some web pages have the user name in the URL (*facepalm*). I do not think we can automatically detect and remove that trace of data, though. But maybe we could shame the web pages into fixing it … *think*
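Stripping the obvious personal parts of a URL before upload could look like this sketch, using only Python's standard library. Which fields to drop is my assumption here, not a description of the shipped code:

```python
from urllib.parse import urlsplit, urlunsplit

def scrub_url(url: str) -> str:
    """Remove userinfo, query string and fragment before sending.

    The path is kept because it is usually needed to classify the page;
    as noted above, user names embedded in the path itself cannot be
    detected reliably and are not handled here.
    """
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.port:
        host = f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, host, parts.path, "", ""))
```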

The parts of the source that collect the data and prepare it for sending are open source. Here I am asking you NOT to trust us: review the code! :-)

I hope we find a simple solution to display the data being sent to us before it is sent. The only problem is that it could hurt your browsing experience: a modal dialog popping up when you expect a page to load is not great.

One option could be to at least offer a global configuration to switch cloud requests off (always, only in incognito mode, or never) and to show in the logs what was sent.
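Such a global switch could be as simple as the following sketch; the setting names and the log mechanism are hypothetical:

```python
from enum import Enum

class CloudOff(Enum):
    """When cloud lookups are switched OFF (hypothetical setting)."""
    ALWAYS = "always"        # never send anything
    INCOGNITO = "incognito"  # only skip lookups in incognito windows
    NEVER = "never"          # cloud lookups always allowed

sent_log = []                # the promised log of what got sent

def maybe_send(query: str, mode: CloudOff, incognito: bool) -> bool:
    """Return True if the lookup was sent (and logged)."""
    if mode is CloudOff.ALWAYS or (mode is CloudOff.INCOGNITO and incognito):
        return False
    sent_log.append(query)   # show the user exactly what left the machine
    return True
```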

We are selling libraries and databases covering malicious files and web pages.

You want your own AV? Or protection technology in your Tetris game to make it unique? Just contact our SI department and make a deal.

Other companies have thousands of web-crawlers simulating user behavior to identify malware.

Millions of real Avira users are our scouts and sensors.

Some branding

We need some branding. That includes Avira-specific changes in the browser (names, logos, some other texts), but also links. This is not only relevant for brand awareness but also keeps our users away from Chrome/Chromium support to avoid confusion (“Which Chrome version do you have?” … listens … “We never released that. Can you please click on ‘About’ and tell me the version number?” … listens … “WTF?!?” => confusion) and directs them to our support – who actually CAN help.


We will continuously improve the build process. There are compiler switches for features like Position Independent Executable (PIE), Fortify Source, etc. that we should enable during compilation (many are already enabled). Most time here will be spent on ensuring that they do not get disabled by accident, are enabled on all platforms, and do not slow down the browser. This task can start simple and suddenly spawn nasty side effects. This is why we need testing, testing, testing.
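A toy check along these lines can guard against hardening flags silently disappearing from the build. The flag set here is an example, not the actual build configuration:

```python
# Example hardening flags; a real build uses more (and platform-specific) ones.
REQUIRED_FLAGS = {"-fPIE", "-fstack-protector-strong", "-D_FORTIFY_SOURCE=2"}

def missing_hardening_flags(cmdline: str) -> set:
    """Return the required flags absent from a compiler command line."""
    return REQUIRED_FLAGS - set(cmdline.split())
```

Running a check like this in CI for every platform is what turns "enabled once" into "stays enabled".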


Google added the Hotwords feature to Chromium and Chrome. It’s a nice feature, but it switches on the microphone and “spies” on the user (as a convenience feature many users want). For our secure and privacy-respecting browser this crosses a line, though. This is why we will have to verify that no surprise extensions get installed by default. One more task for our testers, who add verification tests to the browser to cover our specific requirements. Keep in mind: Chrome and Chromium already have very good unit tests and other automated test cases. We just need some extra paranoia. That’s the job for the testers in our team.

More transparency

We will write blog posts covering all the features. The attacks they block, their weaknesses, what we did and will be doing to improve them. We will offer you a guided tour Down the Rabbit Hole. Go with us as far as you dare.

There is so much we can do to improve the browser. Without touching the core.

We reached the bottom of this specific Rabbit Hole.

Thorsten Sick


The post Avira’s Secure Browser: Plans and Tactics (Part 2) appeared first on Avira Blog.

Avira’s Secure Browser: Plans and Tactics (Part 1)

The Gordian knot

In order to have a secure browser, security issues have to be fixed within a certain time frame. This sounds logical, right? For us that means only a few days after we learn about them. Chrome fixes vulnerabilities with every release, so we are forced to release in sync with the Chrome releases. But every change we make in the Chromium source code can cause merge conflicts: when our Avira-specific changes overlap with changes made by the Chromium developers, our tools cannot combine them automatically. After about 150 changes we had one conflict per week. This meant spending hours untangling code.

The sword to slice through the knot: We will not introduce differences to the Chromium code.

Let’s see the browser more like a Linux distribution (Ubuntu, for example). We select the best tools. Combine them. Maintain them. Optimize them.

Open Source Extensions

There are awesome security extensions for browsers out there. Let’s just invest some man-years, copying their features. We can make closed source versions of those extensions which are almost as good as the original – but OURS!

… just kidding …

We decided to say ‘hello’ to the communities and explained our plans to them. We already started to contribute and will contribute even more (we struggled with the foundation for the browser longer than expected, so we are a bit behind the original time frame – but more about that in another post). The first extensions are integrated, more are upcoming and planned. Efficient engineering. A win-win situation.

Contributing to Chromium

Only code differences between our browser and Chromium cause issues. If we want a security feature and contribute its code to Chromium, we have neither differences nor merge conflicts. We accidentally protect more people than we have to, but nobody is perfect. 😉

We have already contributed a batch of changes that allow simpler branding (see below). But the HTTPS Everywhere guys alone have a wish list of 2-3 large Chromium code changes. Our next step will be to extend the extension programming interface (API), because we want more information available to extensions. For example, right now the encryption details (the cipher suite used, certificates) cannot be seen from an extension. That means that something like Calomel cannot yet be written for Chrome.

Contributing to 3rd party code

Chromium contains more than 100 third party libraries. They can contain vulnerabilities, bugs and flaws. When we find something we fix it and send the patches upstream (= to the authors). We are currently experimenting with the best way to release as many fixes per week as possible. As soon as we have figured out a good solution, we will inform you via another blog post.

Our own extensions

Of course we already integrated ABS (Avira Browser Safety) and our Safe Search. This is a no-brainer. So let’s just move on.

Our external tools

Right now we plan on integrating our AV scanner into the browser. We already scan with WebGuard, but the future of the internet is encryption (more HTTPS, o/). WebGuard is a proxy, and scanning encrypted traffic with a proxy causes lots of crypto headaches. Luckily, the browser has to decrypt the data anyway as soon as it arrives: scanning the decrypted content directly inside the browser solves said crypto headaches.
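Conceptually, the hook sits where the browser has already decrypted the response body. Everything below is a simplified sketch: the real scanner is Avira's AV engine, not a string match, and the function names are invented.

```python
# Toy signature set; stands in for the real scanning engine.
SIGNATURES = [b"evil-payload"]

def scan_decrypted_body(body: bytes) -> bool:
    """Scan an already-decrypted response body; True means 'block it'.

    Because this runs after TLS decryption inside the browser, no
    man-in-the-middle proxy and no extra certificates are needed.
    """
    return any(sig in body for sig in SIGNATURES)

def deliver_to_renderer(body: bytes) -> bytes:
    if scan_decrypted_body(body):
        raise ValueError("malicious content blocked")
    return body  # safe: hand over to the renderer
```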

As of now WebGuard is fine. But of course we already plan for the future. When the future is here we will be ready – with scanning abilities in the browser.

The above is only about 50% of what we plan on doing. Stay tuned for two more, rather advanced, tactics that we plan to use, which will be described in the next blog post!

There is so much we can do to improve the browser. Without touching the core.

Halfway down the Rabbit Hole. Time for a break.
Thorsten Sick

The post Avira’s Secure Browser: Plans and Tactics (Part 1) appeared first on Avira Blog.