The goal is to create an easy-to-use, secure and privacy-respecting browser. These are the more advanced tactics we will be using:
Our Cloud DBs
Adding cloud features to file scanning was a big success; the detection quality for malicious files went straight up. In short:
On the client, behavior detection acts as a kind of pre-selection. If a file is suspicious, the cloud server is asked whether the file is already known.
If unknown:
- An upload is requested
- The file is uploaded to the server
- There we have several detection modules that cannot be deployed on the customers' PCs (an AI with a large database, sandboxes for behavior classification, etc.). They scan and classify the file
- The database is updated
- The results are sent back and you are protected
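To make this concrete, here is a minimal sketch of the client side of that flow. The endpoint URL, the JSON fields and the verdict values are invented for illustration; the real protocol is not documented here.

```python
# Minimal sketch of the client-side cloud lookup for a suspicious file.
# Hypothetical endpoint and field names; not the real Avira protocol.
import hashlib
import json
import urllib.request

CLOUD_API = "https://cloud.example.invalid/api/v1"  # placeholder endpoint

def sha256_of(path):
    """Hash the file locally, so at first only the digest has to be sent."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def classify_file(path):
    # Step 1: ask whether the file is already known.
    req = urllib.request.Request(
        CLOUD_API + "/lookup",
        data=json.dumps({"sha256": sha256_of(path)}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        verdict = json.load(resp)
    if verdict.get("known"):
        return verdict["classification"]        # e.g. "clean" or "malicious"
    # Step 2: unknown file, so an upload is requested and the file is sent.
    # Server-side modules (AI, sandboxes, ...) scan it, the database is
    # updated and the classification comes back.
    with open(path, "rb") as f:
        upload = urllib.request.Request(CLOUD_API + "/upload", data=f.read())
    with urllib.request.urlopen(upload) as resp:
        return json.load(resp)["classification"]
```

The important detail is that this only runs for files the local pre-selection already flagged as suspicious; everything else never talks to the cloud at all.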
Over the last years we have built incredible databases covering malicious files. We should have something similar for the browser and use our large knowledge base and server-side classification tools for web threats as well.
It should look something like this:
- The browser detects something strange (“behavior detection”); this is the pre-selection
- It asks the backend database if this is already known
- If not: relevant data (URL, file, …) is uploaded for inspection
- Our server-based tools (and our analysts) will classify the upload and update our databases
- The result is sent back directly (within milliseconds. Yes, the tools are that fast. We will try to improve our analysts)
- You are protected
- We are improving our “evil parts of the internet” map.
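Sketched in the same spirit, the browser-side flow could look roughly like the snippet below. Everything in it is a placeholder assumption: the suspicion signals, the class names and the verdict strings; the real pre-selection logic is exactly the part that needs careful tuning.

```python
# Rough sketch of the browser-side flow. All names, signals and verdicts are
# placeholders; the backend stand-in mirrors the file-scanning sketch above.

def looks_suspicious(url, signals):
    """Local pre-selection ("behavior detection"); placeholder heuristics."""
    return bool(signals & {"auto_download", "obfuscated_redirect", "spoofed_login_form"})

class CloudBackend:
    """Stand-in for the real backend database and classification tools."""
    def __init__(self):
        self.known = {}                        # our "evil parts of the internet" map

    def lookup(self, url):
        return self.known.get(url)             # None means: not known yet

    def submit(self, url, evidence):
        verdict = "malicious"                  # stand-in for server-side tools/analysts
        self.known[url] = verdict              # the database gets updated
        return verdict

def on_page_load(url, signals, cloud):
    if not looks_suspicious(url, signals):
        return "clean (no cloud request)"      # normal pages never leave the machine
    verdict = cloud.lookup(url)                # already known?
    if verdict is None:
        verdict = cloud.submit(url, evidence=sorted(signals))
    return verdict                             # "malicious" -> block the page and warn

print(on_page_load("http://example.test/dl", {"auto_download"}, CloudBackend()))
```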
To get there we will have to improve the signal-to-noise ratio. We are only interested in malicious pages. If the pre-selection in the browser is too aggressive and sends non-malicious pages to us, it's a waste of CPU cycles and bandwidth. Multiplied by millions of users, even minor slips become expensive and annoying for everyone involved.
We will also remove private data before sending it (we are not interested in user data. We are spying on malware). Personal data is actually toxic for us. Servers get hacked, databases stolen, companies gag-ordered. Not having that kind of data on our servers protects us as well as you. I mean just think of it: Some web pages have the user name in the URL (*/facepalm*). I do not think we can automatically detect and remove that trace of data though. But maybe we could shame the web pages into fixing it …*/think*
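As a small illustration of the scrubbing we can do automatically, the sketch below drops credentials, query strings and fragments from a URL before anything would be sent. User names baked into the path itself are exactly the part we cannot detect reliably. The function and its policy are illustrative assumptions, not shipped code.

```python
# Strip obviously private parts from a URL before it is sent for inspection.
from urllib.parse import urlsplit, urlunsplit

def scrub_url(url):
    parts = urlsplit(url)
    host = parts.hostname or ""     # dropping user:password@ credentials
    if parts.port:
        host = f"{host}:{parts.port}"
    # Keep scheme, host and path; drop query string and fragment, which often
    # carry session tokens, search terms or user identifiers.
    return urlunsplit((parts.scheme, host, parts.path, "", ""))

print(scrub_url("https://alice:secret@example.com/profile?session=abc123#inbox"))
# -> https://example.com/profile
```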
The parts of the source code that collect the data and prepare it for sending are open source. Here I am asking you NOT to trust us and to review the code!
I hope we find a simple solution to display the data being sent to us before it leaves your machine. The only problem is that this could have a negative impact on your browsing experience. Having a modal dialog pop up when you expect a page to load …
One option would be to at least offer a global setting to switch cloud requests off (always, in incognito mode only, never) and to show you in the logs what got sent.
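Here is a sketch of what that setting could look like, with the three modes and a local log of everything that was sent. The names and the log format are illustrative assumptions.

```python
# Sketch of a global cloud-request setting plus a local "what was sent" log.
import enum
import json
import time

class CloudRequests(enum.Enum):
    OFF = "off"                              # never ask the cloud
    OFF_IN_INCOGNITO = "off-in-incognito"    # ask, except in incognito windows
    ON = "on"                                # always allowed

def cloud_allowed(setting, incognito):
    if setting is CloudRequests.OFF:
        return False
    if setting is CloudRequests.OFF_IN_INCOGNITO and incognito:
        return False
    return True

def log_sent(logfile, payload):
    """Append a record of what was sent, so it can be audited later."""
    entry = {"timestamp": time.time(), "payload": payload}
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
```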
Advertising
We are selling libraries and databases covering malicious files and web pages.
You want your own AV? Or protection technology in your Tetris game to make it unique? Just contact our SI department and make a deal.
Other companies have thousands of web-crawlers simulating user behavior to identify malware.
Millions of real Avira users are our scouts and sensors.
Some branding
We need some branding. That would include Avira-specific changes in the browser (names, logos, some other texts), but also links. This is not only relevant for brand awareness but also to keep our users away from Chrome/Chromium support to avoid confusion (“Which Chrome version do you have?” … listens … “We never released that. Can you please click on ‘About’ and tell me the version number?” … listens … “WTF?!?” => confusion) and to direct them to our support – who actually CAN help.
Hardening
We will keep improving the build process. There are compiler switches for features like Position Independent Executable (PIE), Fortify Source, etc. that we should enable at compile time (many are already enabled). Most of the time here will be spent on ensuring that they do not get disabled by accident, are enabled on all platforms, and do not slow down the browser. This task can start simple and suddenly spawn nasty side effects. This is why we need TestingTestingTesting.
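As an example of such a safeguard, a small post-build check could inspect the final binary and complain loudly when a hardening flag silently disappears. This is a rough sketch for a Linux build using readelf and nm heuristics; the exact checks are assumptions, not our actual build tooling.

```python
# Post-build sanity check: did the hardening flags survive? (Linux-only sketch)
import subprocess
import sys

def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def check_hardening(binary):
    problems = []
    if "DYN" not in run("readelf", "-h", binary):       # PIE binaries are ET_DYN
        problems.append("not built as a Position Independent Executable")
    if "BIND_NOW" not in run("readelf", "-d", binary):   # full RELRO
        problems.append("BIND_NOW (full RELRO) missing")
    if "_chk" not in run("nm", "-D", binary):            # e.g. __memcpy_chk
        problems.append("no FORTIFY_SOURCE checks found")
    return problems

if __name__ == "__main__":
    issues = check_hardening(sys.argv[1])
    for issue in issues:
        print("WARNING:", issue)
    sys.exit(1 if issues else 0)
```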
TestingTestingTesting
Google added the Hotwords feature to Chromium and Chrome. It's a nice feature, but it switches on the microphone and “spies” on the user (it is a convenience feature many users want). For our secure and privacy-respecting browser this crossed a line though. This is the reason why we will have to verify that no “surprise!!!” extensions get installed by default. One more task for our testers, who add verification tests to the browser to cover our specific requirements. Keep in mind: Chrome and Chromium already have very good unit tests and other automated test cases. We just need some extra paranoia. That's the job for our testers in the team.
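A test along these lines could catch “surprise” extensions in a fresh default profile. The profile layout, the preference keys and the allowlisted IDs are assumptions for illustration.

```python
# Fail when an extension we did not expect shows up in a fresh default profile.
import json
from pathlib import Path

# Extension IDs we ship on purpose (hypothetical values).
ALLOWED_EXTENSION_IDS = {
    "aaaabbbbccccddddeeeeffffgggghhhh",   # e.g. our own safe-search extension
}

def installed_extension_ids(profile_dir):
    """Collect extension IDs recorded in the profile's preferences files."""
    ids = set()
    for name in ("Preferences", "Secure Preferences"):
        prefs_file = Path(profile_dir, name)
        if prefs_file.exists():
            prefs = json.loads(prefs_file.read_text())
            ids |= set(prefs.get("extensions", {}).get("settings", {}))
    return ids

def test_no_surprise_extensions(profile_dir):
    unexpected = installed_extension_ids(profile_dir) - ALLOWED_EXTENSION_IDS
    assert not unexpected, f"unexpected extensions installed: {unexpected}"
```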
More transparency
We will write blog posts covering all the features. The attacks they block, their weaknesses, what we did and will be doing to improve them. We will offer you a guided tour Down the Rabbit Hole. Go with us as far as you dare.
TL;DR:
There is so much we can do to improve the browser without even touching the core.
We reached the bottom of this specific Rabbit Hole.
Thorsten Sick