
Some news from the W3C innovation lab


I am spending some time this week in Beijing, with W3C folks and members. And there is one thing that I believe deserves to be known. Six months ago, W3C changed its organization. And. After the W3C staff presented the big picture to the new W3C members, it struck me that this organization will definitely support the mission of W3C – which is, as everyone knows, leading the web to its full potential. So let me share with you how W3C is going to manage its strategy to shape the future, as the W3C management team explains it.
W3C now has a strategy team, whose role is to identify new topics to be standardized and to support their kickoff. This starts with canvassing the web, identifying W3C members' interests, and organizing workshops when there is a strong suspicion that a specific feature is trendy and that there is room for a web-related standard. Other insights come from conversations, press reviews, member requests, and spotting growing communities of adopters.
Some examples of successful explorations.
W3C recently held a virtual reality workshop, gathering developers and technologists. It questioned whether the open web platform could be extended, and some tracks were confirmed. There is now an active community group, which will bring something to chartering.

Another successful workshop was related to blockchain and the web. The objective was to explore the opportunity of exposing blockchain to web apps, and the role W3C could have in improving interoperability – which is what a standard is about. A blockchain community group is currently working on defining the actual web-related use cases and evaluating whether technology adoption permits a switch to standard deliverables.
How does W3C decide to kick off new working groups? The involvement of W3C in a specific technology has to happen at the right point. Too early, and the work will be slow and potentially irrelevant; too late, and the technology might already be fragmented. This has to happen when the technology is mature, rolled out in devices supporting browsers, and when some minimum viable prototyping has been demonstrated for the web. This is why W3C is encouraging incubation work. Incubation produces prototypes, design documents, some code examples, and use cases. Based on this first level of assessment, W3C can decide to open a working group. This also has a collateral benefit for the W3C members' patent commitment – which is about a member sharing its essential patents for free. Because, when signing that royalty-free commitment for a WG, members have a relatively clear idea of the group's deliverables.
Where is incubation happening? There is a platform for proposing any community group, with a limited IP commitment. It can be used to engage conversation, draft work, and gather people with common interests. Some CGs are independent, and others are supported by W3C team members. In addition, there is a very popular incubation community group, the Web Platform Incubator Community Group, which is known as the entry point for enriching the HTML set of specifications. Anyone can drop a proposal and get feedback from major browser makers and web platform influencers.

And how does a feature become a recommendation? Once incubated work is felt ready by the W3C strategy team, a working group charter is designed. The W3C members will review it, and may fine-tune it, or object. And this is how a new idea gets a chance to be part of the famous list of W3C Recommendations!

All the W3C strategy team's work can be followed on GitHub: https://github.com/w3c/strategy. Track it and contribute!

Starting a new innovation adventure? I said yes!


How long have I been wandering in the tech standards ecosystem? 12 years! Yep, 12 years supporting the great ideas of advanced products, building industrial solutions with customers, competitors and partners. Changing market or technology focus every year, to keep the interest up and keep learning things. In the telecom area, in the banking field, on the mobile planet and for the sake of the web. I have been proudly representing my company in ETSI, GSMA, SIMalliance, GlobalPlatform, W3C, OWASP and the FIDO Alliance. All those organizations may not sound familiar to you, as they are mainly dedicated to B2B markets or specific to the security industry, but this is where the actors of their respective fields agree on common technology. Balancing their own interests, with power, smartness and vision (well, …).

And? This is the end of that series of travels all around the world, where I could have sake with some friends, hamburgers with others, German beers in gardens, and special moments with those tribes. All of them have their strong characters, their dedicated supporters, their eternal detractors, their fantastic chairmen and chairwomen (less usual). I met hundreds of people able to make the distinction between conflicting business interests and friendship. Able to change their minds, to learn, and to share their good advice and experience. Each tribe was a new continent that I had to discover, understand, and convince (and I ended up loving them). Leaving that (most of the time) friendly ambiance is a hard decision.

I was offered a great job: managing the communication of gemalto technology innovation. Cross-market, cross-company, cross-people. The kind of job that you cannot decline when, like me, you used to spend your energy valuing each piece of innovation or creative person you met. I said yes. And that 'yes' moment was a great one – one I am still appreciating. I am currently making up my mind, drafting strategy, listing all the great things that position will allow me to do. Trying to make sure the spirit of making, sharing and valuing people stays in the plan, as it did in all my previous missions. More in a few weeks!

Picture: Saint Malo diving, by Nicolas Doreau

Machine learning for mere humans…


In early December, Nicolas Courtier invited me to present the basics of artificial intelligence to an audience of more or less tech-savvy lawyers and legal experts. This "Droit et Numérique" (Law and Digital) symposium had the broader ambition of reviewing the state of the law and the open questions around technological disruptions such as the smart city, big data, machine learning and blockchain. I can only encourage you to go and check the Storify of this day rich in knowledge sharing, or to browse my short article on some of the questions raised (in English). But let me also share with you, dear assiduous reader, the session I led.

Machine learning, what is it? Machine learning is a sub-category of artificial intelligence, which does not make the topic any less interesting. This discipline consists of predicting the behavior of a system, or of a human, from a model – more or less precise. The idea is to rely on the past to predict the future. So yes, let's bury right away our fantasy of a self-managing, learning machine that would solve the world's sorrows in our place. Precisely because machine learning feeds on the past, it requires a very large quantity of data, of very good quality. And yes, machine learning demands quantity *and* quality. To finish with the main principles: in order to deploy a machine-learning-based solution, you need software that describes the model of your system, data, and a data scientist who will refine the model.
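To make this concrete, here is a deliberately tiny sketch (illustrative only, not from the talk): a one-variable model "learned" from past observations by least squares, then used to predict a future value. Real machine learning differs mostly in scale and model complexity, not in spirit.

```typescript
// Toy illustration: "learn" a linear model y = a*x + b from past data
// by least squares, then use it to predict the future.
function fitLine(xs: number[], ys: number[]): { a: number; b: number } {
  const n = xs.length;
  const meanX = xs.reduce((s, x) => s + x, 0) / n;
  const meanY = ys.reduce((s, y) => s + y, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  const a = num / den;
  return { a, b: meanY - a * meanX };
}

// Past data: monthly sales. The "data scientist" step is judging
// whether a straight line is an acceptable model at all.
const months = [1, 2, 3, 4, 5];
const sales = [10, 12, 13, 15, 16];
const model = fitLine(months, sales);
console.log(`Predicted sales for month 6: ${model.a * 6 + model.b}`);
```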

The promises of machine learning. They come in different flavors, depending on the use case, but one can think of duplicating human expertise (understand: duplicating an expert with long studies behind them, such as a doctor, a lawyer, or any worker whose value relies on factual knowledge); one can also think of doing better than humans (by hunting for correlations between events that the human brain would not have thought of). In the domain of services it is the same: machine learning can improve a service (for example a keyword search, or a product recommendation in a catalog) or create new ones (smart management of trolls on Twitter, for example).

The applications hungry for machine learning. Potentially all of them. But the first buyers of the technology are marketing (prospecting, tailor-made content, after-sales service, automated chat…), recommendation/search/match services, and the security world (for predicting risks and fraud). All hopes and fantasies are allowed in the health domain (the quest for immortality, and for better, well-timed diagnoses) and smart cars…

Machine learning, a science inventing itself. Like all the other disciplines that emerge and are projected into our world, machine learning is subject to hype currents such as open source, crowdfunding, the cloud, community building… The many initiatives around machine learning aiming to make it more efficient, more readable, more accessible are as many opportunities to enrich this science.

In short. Machine learning is a technology newly under the spotlight, as blockchain and big data have been over recent months. It is also a new opportunity to shake up the models that describe and drive our world. It introduces an even more intensive use of data, the famous wealth of our century, and it also means accepting more prediction, more approximation, in a complex world. A trend and usages to watch closely, then…

And the slides? Machine learning for mere humans, the slide version, is over here. Enjoy!

Law and digital disruptions, examples of machine learning and smart city


As part of the amazing opportunities I get with my job, I was invited to a one-day workshop organized by the AFDIT, the French association of lawyers specialized in IT and computing systems (part of the International Federation of Computer Law Associations, IFCLA). This day aimed to have lawyers discuss the impact of technology on the law, in the public or business arena. The perimeter of the discussion was Europe and the US, so speakers from all around the world came and shared their experience. In order to educate and make progress on major 2016 topics, the organizers, Nicolas Courtier and Yves Léon, selected the themes of the day: smart cities, artificial intelligence and blockchain. Here are some interesting elements that were raised throughout the day.

Smart city, what does it mean? We have all heard about the smart city: it is the promise to improve town management, population mobility, and the citizen service offer by connecting all possible pieces of information and building tailor-made services. That is the vision some local politicians promoted during the day (Caroline Pozmentier, Stéphane Paoli), together with some French Tech actors. The other way to see it, explained by Art Langer from Columbia University, is to position the citizen in the middle of the town dynamic, by offering him or her better mobility, frictionless social relations, great work opportunities, better democracy (which is great news for the humans). That vision suggests a potential coming race among towns to become the most attractive town in the world, in order to maintain economic and population growth. All those improved services will be based on large data collection operations, or on the interconnection of databases. In order to do so, service providers may require private and public actors to collaborate, canvassing the city and the citizen and grabbing the appropriate information. And then comes the question of privacy, which might be one of the most challenging questions in a model where the consumer is a citizen.

Big data privacy challenge in the smart city. The relevance of smart city services relies on the consolidation of data sets for which confidentiality and anonymity are hard to guarantee. In addition, this mixing of data sets triggers the question of ownership and liability. Who would own the data and be responsible for a failure of data maintenance? That question will anyway have to be answered with the coming European regulation on privacy. As Massimo Attoresi explained, this regulation mandates that all actors of a service handling data (collecting, processing, storing or destroying it) take care of the data: clear processes for user opt-in, transparency in usage, fairness in collection, data minimization (the less you take, the better), storage limitation, integrity and confidentiality, and informing the user about potential leaks or incidents. How do you explain a clear purpose of data retention when you don't know which service will come out of your data collection? How can you assess the risks when you have a dynamic system, with cross-system responsibility? How can you guarantee anonymity when so much information, including geo-localized data, is collected? Interesting questions that smart cities will have to answer…

Smart city opportunities. If citizens consider smart cities a life improvement, some ways to roll out the smart city could come with great benefits for society. Without ignoring the potential threats to citizen privacy coming with the smart city, Philippe Mouron drafted for us some of its positive aspects. The idea of integrating citizens into service design could be a great way to improve service relevance. In addition, the collection of data, and the fact that data belongs to the citizen, may accelerate the open data movement. Philippe also advocated for a better mixing of legal and tech know-how in the lifecycle of devices, in order to make sure that everyone sees an interest in "the silence of the chips" (aka users being in control of stopping data collection and leaks towards servers).

What about machine learning? We discussed during that day the concept of machine learning. I reminded the audience of its basic principles. You know. The fact that machine learning is a sub-category of artificial intelligence, which consists of predicting the future (or the most probable one) based on past data. I listed the required skills and tools to roll out machine-learning-based services (aka software, some good pieces of data, a smart scientist fine-tuning your model). I reminded the audience of the first use cases benefiting from machine learning, which are marketing, search and recommendation, security, health and smart cars. One of the main takeaways I asked people to remember was the fact that we are switching from a deterministic world (where each line of code describes a possible situation, and where programs take well-known roads) toward a world where we describe our environment with a model, with more or less error and accuracy. Based on that, I took the opportunity to raise the questions machine learning triggers for me, such as privacy, liability and error management. And I got a few answers from the other speakers.

What could be the legal impact of machine learning? @rubin demonstrated how machine learning could impact the legal business, replacing some assets of the lawyers and potentially introducing a better understanding of the risks and gains around trials. Rubin also reminded us that the law was not designed for robots, but for humans, and for ensuring fair interactions among humans, including in business situations leveraging technology. He gave some interesting perspectives on how to pave the way towards a mastered artificial intelligence deployment, based on a few principles: clear responsibility; transparency of artificial intelligence in decision making (especially for the ones suffering the decision); efficient maintenance and regular audits of the artificial intelligence systems involved in services; and lastly, a permanent possibility to challenge the results of services based on artificial intelligence. Those principles, based on good will and fair relations, were good to hear and could be integrated into any strategy embedding machine learning, now.

My takeaway from that legal and tech workshop. Yes, definitely, mixing perspectives and visions is key to having everyone progress in the understanding of a transversal topic such as technology in society. And. The topic of ethics in software is definitely an additional item to add to our watch list, together with the privacy expectations.

 

Pitch and Play!

Pitch and Pitch. Last week I was part of a gemalto team organizing a hackathon on security topics, with some great devs, tech architects, product managers and marketing folks. We spent 3 days playing the game of being a startup. And, obviously, we had to play the game of the pitch. This kind of standard exercise, where a jury expects from you all the energy, all the positive power, to decide to bet on your project. That formal presentation mandates that you cover the important stuff, such as the purpose of your project, the ultimate value proposition, an amazing business model, and potentially unveiling your heart, to convince everyone that investors can trust you to roll out the stuff you promised and make them rich. Well, that is a short sum-up of a pitch, but here is the spirit. And it usually comes with a lot of pressure.

Play and Pitch. This is where I believe the PitchCards project could help. I had the chance to handle a beta version. The PitchCards project is a game about helping pitchers pitch, with no fear. The purpose of the game is to pitch a purely exotic project. A project that you have to invent in 10 minutes by collecting, eyes closed, 3 cards: one indicating which type of project you will work on (a connected device, a car, …) and two others expressing a domain, or a target (babies, dinosaurs, …). Once your pitch is ready, you have to present it in front of the other players. Purpose, business model, and all the nice story your imagination built. Your audience will listen carefully and will have to give feedback on how your pitch went. This is triggered by choosing random questions from a card deck. Where did you look? Did you breathe correctly? What is your motivation? …

Pitch and Learn. I believe this game is sooooo relevant in the special timing of a hackathon. It is a way to train your attitude, to educate your voice and your mindset to present something fun, removing the fear and the giant attachment every startuper has to their own project. It is much easier to receive a question about your talk's efficiency while dealing with a fantasist project than when speaking about the super-idea you have been working on for 3 days or 3 months, isn't it?

Buy the project. The PitchCards project will go live in January and you will have a chance to sponsor it, as it will land on Kickstarter. In the meantime, the team, made of Will and Camille, will improve, train, pitch and redesign the cards and the concept. But definitely, as a beta tester, I enjoyed the concept and the spirit! You might too, if you have any interest in pitch fun.

Are Hardware Based Secure Web Services a lost quest? No. Well…


As co-chair of the W3C community group aiming to offer web developers the possibility to access services provided by hardware tokens, I regularly receive questions about where this work is going…

Well. Executive summary. The good reasons for allowing a web app to access secure services hosted in a hardware token, and the possible ways to implement that in browsers, are ready. But this is still not on the W3C planet. It lives in the form of a report, edited by Sébastien Bahloul, a Morpho guy, and discussed with the W3C Community Group members.

In detail. The good reasons for allowing web developers to access keys stored in a hardware token, or to trigger a signature which cannot be repudiated, are detailed in the report. There are specific industry examples, such as government e-services, e-banking services, or commercial transactions requiring legal binding, such as online signature. The potential users of this feature are legion. Basically, the European regulation named eIDAS "regulates electronic signatures, electronic transactions, involved bodies and their embedding processes to provide a safe way for users to conduct business online like electronic funds transfer or transactions with public services". To deploy such services on the web, the web developer needs some means of accessing the hardware token (or the web will miss that European digital trust promise). Other countries such as Bolivia, Uruguay, Argentina and Peru also require similar technology.

The technical aspects. The technical proposal embedded in this report is made of two features. First: a way to implement the W3C Web Crypto API in a hardware token. This allows the generation and usage of a cryptographic key inside a token belonging to the user. Second: a way to digitally sign a transaction with a key, again stored in a hardware token, performing the signature confirmation via an interface the user can trust. Those two services are some of the building blocks of a trusted web, where the user is in control of the credentials used to cipher or sign data.
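To picture what this means for a web developer, here is a minimal sketch using the existing Web Crypto API, whose key generation and signing calls are the kind of operations the report proposes to back with a hardware token. Today such a key lives in the browser's own key store; the hardware binding and the trusted confirmation UI are exactly the missing pieces.

```typescript
// Minimal sketch: generate a signing key pair and sign a transaction
// payload with the standard W3C Web Crypto API. Today the key is
// browser-managed; the CG proposal is about backing keys like this one
// with a user-owned hardware token and a trusted confirmation UI.
async function signTransaction(payload: string): Promise<ArrayBuffer> {
  // Non-extractable ECDSA P-256 key pair: the private key can be used
  // but never exported by script.
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // not extractable
    ["sign", "verify"]
  );
  const data = new TextEncoder().encode(payload);
  // With a hardware token, this is the step where the user would
  // confirm the signature through an interface they can trust.
  return crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.privateKey,
    data
  );
}
```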

So what is wrong? Well. This set of usages and technical features was presented to a large group of W3C members during the last W3C TPAC. And nothing amazing happened. The browser makers were kindly requested to have a look at it. But they demonstrated low interest, even though this topic has been discussed since September 2014. There might be a cultural problem behind the slow progress of this topic in W3C. Online access to European government services is not a priority for the major browser makers. In addition, most of the actors of the security industry have managed hacks to be able to use smart cards or hardware tokens, like plugins. But this era is over, as plugin maintenance and attacks are getting more sensitive.

And what is next? Next is about gathering the companies and countries interested in this feature, and starting to demonstrate to W3C that there is an important question here: do we want the web to get into secure services, as required by online signature and government services? So if you are among the actors believing this web feature is key, join the Hardware Based Secure Services CG, so that we can collectively work on creating a Working Group in W3C…

What’s happening with the W3C Web Crypto API ?

 

Well. The specification is finished!

[Here, cheers to Ryan Sleevi, Mark Watson and Harry Halpin, who actually led the editorial work during this 4-year effort.]

Where is it? You can read the most recent version here. This is the version that will be submitted to the W3C Director (Tim Berners-Lee) in order to make it a real W3C Recommendation. Fingers crossed.

Is it real? Yes. During the lifetime of the spec we had major browser makers contributing and monitoring, aka Google, Microsoft and Mozilla. Thus it is implemented. See http://caniuse.com/#feat=cryptography
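If you want to see it for yourself, here is a tiny example you can paste into the console of any recent browser; it only relies on the standard crypto.subtle.digest call the specification defines.

```typescript
// Quick check, runnable in a modern browser console: hash a string
// with SHA-256 through the Web Crypto API.
async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(text)
  );
  // Render the resulting ArrayBuffer as a hex string.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

sha256Hex("hello web crypto").then(console.log);
```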

Where is the interoperability proof? The test coverage can be found here.

So. What is the future? Consider that things are moving on, and that the group will soon enter its maintenance mode. The next action, once the specification is a Recommendation, will be to listen to the market and add any new algorithm that becomes widely used.

Thanks! That was a long and passionate piece of work in W3C. Thanks to all the members and individuals who contributed…


Non-violent security talk for small and medium business @ BlendWebMix

In December, I was at a web conference named #BlendWebMix, which gathers all kinds of actors of the web economy, from investors to tech, including designers, influencers, politicians, startupers… Very diverse types of talks were given – 80 of them – and 1800 people attended the event. I was selected to give a very short presentation on privacy and security. My challenge was: convincing a broad audience, in 13 minutes, that privacy is something each of us, as workers, should take action for. Here is the core of my message.

I am fed up with the usual security talk which says "provide privacy by implementing some security, or you will burn in the hell of bad-reputation companies, together with Madison, Target, Yahoo, and potentially go bankrupt". You know, that Fear, Uncertainty and Doubt (FUD). I tried another angle. I tried the non-violent path. And I believe there are at least two good reasons why people should give a chance (and budget, and effort) to privacy.


The first reason can be found on the optimistic side of life: good reputation. I have the feeling that in this digital storm of hacks, global attacks and social media bashing, the companies taking action to preserve the privacy of their users are playing a good game. And the user may know. And the user may appreciate it. And it may be a competitive advantage to invest in it and get rewarded for it.


The second reason is data protection, as defined by the European Commission. There is a new regulation that mandates every company to allow its users to keep an eye on their data. It is the result of long discussions about the value of citizen privacy in our digital world. That regulation becomes applicable in May 2018, to all European companies and to all non-European companies handling European citizens' data. Well, yes, 2018 is after tomorrow. Which gives you only tomorrow to ramp up your good practices and get ready. The threat, if you are not compliant with the regulation, will directly touch your wallet, as fines can go up to 4% of your turnover as a company. Universities and public services are also subject to this regulation.

What does this regulation say? It says that users will have to explicitly opt in to the collection of their data, that they will be able to control what you are doing with the data, and that they will have the right to modify and delete their data. In addition, data portability will have to be provided. Finally, users will have to be informed about any breach related to their data. Data, in this context, means any piece of information which characterizes the user: name, address, but also geo-localization, social media activity, any digital trace left by the user that you are collecting.
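For developers wondering what those obligations could look like inside a codebase, here is one possible way to model them as a service contract. This interface is entirely hypothetical (the regulation prescribes obligations, not APIs); it just maps each user right from the paragraph above to an operation your service would have to support.

```typescript
// Hypothetical contract mapping the regulation's user rights to
// operations a service would need to support. Illustrative only:
// the regulation prescribes obligations, not any particular API.
interface UserDataRights {
  // Explicit opt-in, recorded per purpose of data collection.
  recordConsent(userId: string, purpose: string): Promise<void>;
  // Right of access and data portability.
  exportData(userId: string): Promise<unknown>;
  // Right to rectification.
  updateData(userId: string, changes: Record<string, unknown>): Promise<void>;
  // Right to erasure.
  deleteData(userId: string): Promise<void>;
  // Breach notification towards the affected users.
  notifyBreach(userIds: string[], description: string): Promise<void>;
}
```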

Who is subject to this regulation? Any company which collects, processes, transmits or stores the data. This means you, but also anyone touching the data closely or remotely – for example, the monetization partners (ads), or your cloud providers. Now you see what the impact could be!

During the talk, I tried a new technique for getting the audience sensitive to the message. I asked them to pause a second, to close their eyes, to breathe, and to think about one of their users. Lea, 30 years old, digital, agile, a conscious citizen, caring about her privacy. I asked the audience to answer in the secret of their mind and heart, eyes still closed, the following questions: do you know which of Lea's data you are taking into your super-super application or service? Do you know where Lea's data is stored? When was the last time you had a conversation about privacy and security at work? I mean, not on Twitter, being scandalized by the global surveillance of the states, but wondering, within your own framework. Some of the people in the audience smiled, and I felt some of the questions touched them. What about you?

Aiming to convince the audience, in a smooth way, to take action for the privacy of their users, I reminded them that it was important to identify the data, understand its life cycle within their own service life cycle, define the weak points (aka any entry point, transfer, storage…) and protect those points. The thing is that if you are a small company, you may not know where to start. My key message was: well, start with pragmatic stuff.

First. Talk about security; create conversation around it. For example, set up a 2-hour meeting with the project manager, or whoever in the company coded the solution, with a global view. And together make a status of the different security measures taken up to now. Make an accurate status.

Second. Look for security champion(s) in your team. Basically the one(s) who had security training at school, or who had the chance to work on a security-sensitive project in the past, and who may share with others.

Third. Write a process. It could be a paper sheet in the cafeteria reminding people: i) before you ship a new feature, ask John (the security champion) for a code review; ii) before you sign a deal with a company, check its track record in security… Or it could be a professional methodology for bigger companies. Well, the objective is just to make sure the question of security is handled in the product life cycle, at company scale, and taken into account in delays and deals. This is about creating a security culture in the company.

Fourth. Engage in conversation with your partners and providers; ask them basic questions about their security investment. They might be able to prove that they actually take care of it, with certification, or by telling you a nice story about their efforts in that matter. Just like any company should be prepared to.

Fifth. Crash-test your product. Some bug bounty platforms now exist. You can submit your product, it will be attacked by some hackers, and if security vulnerabilities are found, you will be informed. The next level, or a complementary action, could be to perform an audit of your code, or to get an actual security certification (but I guess that if you are on a market where a security certification scheme exists, you might already be a security-aware company).

Sixth. Monitor security news. Read some newspapers specialized in security, or some forums alerting on vulnerabilities. It would be a pity if your service bim-bam-boum were based on a framework which has been seriously hacked without you being aware of it.

In the end: six possible concrete actions, which can be rolled out by any security non-expert. I asked the audience again to close their eyes. And to pick one action from that list, just one action. And to promise, in the secret of their mind, to do it on Monday morning, when coming back to the office. Hoping that next Monday some SMBs will enter the way of improving the privacy of their services…

Note: all pictures copyright Garry Winogrand.

Some news on the Trusted Execution Environment side…


Some time ago, I wrote about the Trusted Execution Environment (TEE) and how promising it was. A few months ago, I mentioned the arrival of Trusty TEE in Android, an API allowing mobile applications to interact with TEE-based services. One can still wonder, in 2016, where that technology stands.

A reminder about what the TEE is. Well, it is still an isolated environment, shipped in smartphones, offering a way to deploy code that will be securely stored and executed. It can support any mobile application that requires sensitive operations and a trusted user interface, to ensure that what you see is what you sign.

But the major question, when it comes to nice technology, is: "Yeap, your stuff sounds cool, but who on earth is using it?"

Well. Let's see the facts. On the GlobalPlatform website, you can find 8 products that passed the official functional certification. You can check this yourself here. Among the certified vendors, one can note Samsung.

And what does Silicon Valley say about it? A recent event gave an overview of the market: the TEE Seminar, which happened in October in Santa Clara. This is a regular seminar gathering the usual suspects of the TEE ecosystem. Speakers included ARM, Visa, Trustonic (one of the well-known TEE providers, a company co-owned by gemalto), the FIDO Alliance, Linaro (which offers an open source TEE named OP-TEE), Ericsson, Verimatrix (guys in the game of content distribution and IPTV), plus gemalto (my company) and G&D (one of my company's competitors). The key TEE topic this year was the Internet of Things. While the TEE technology seems to be distilled into the smartphone market via official products (see the Samsung statement, the Android Trusty TEE API and Secure Enclave [PDF] in iOS), the next wave ready to take benefit from it is the Internet of Things.

Any diverging creative geeks interested? In the same month of October, there was another interesting event in Silicon Valley: a TEE hackathon, #BuildWithTEE, dedicated to getting benefit from the technology. It was organized by BeMyApp and GlobalPlatform. It happened that 100 people joined the hackathon over the weekend. The exciting pitch moment was made of 22 smart ideas, 12 made it to the end of the Sunday, and 3 winners shared 10,000 US dollars. The material provided to participants was a Linaro OP-TEE loaded on a Raspberry Pi 3, and all they had to do was play with Linux and implement their idea, with the objective of using the key asset of the TEE, aka security, on the client or server side. The winning ideas were about monitoring door locks when renting out your house, deploying a privacy-respectful tracking system, and a centralized password management server. IoT use cases were the major ones the creative geeks wanted to explore.

So, to conclude: the TEE is a technology alive and kicking, and it will definitely support nice innovation in the field of the all-and-everything-connected!

Note: picture from https://www.pinterest.com/cdnmomma/for-the-of-europe/


Tadaaa, Trusty lands in your phones…


TL;DR. Trusty is landing in your phones. It is a secure execution framework, it is cool, and your data and your mobile apps' sensitive operations will benefit from it. And here I explain how, in simple words – for people who are not security geeks.

Your mobile and security (warming up). Mobile phones host more and more sensitive data, related to our personal, social and professional lives. While we long considered that the most common and costly attacks happened on centralized computing systems, such as servers or IT systems, one has to admit that attention is now also turning to mobile phones. Applications loaded on a phone can embed silent code and perform inappropriate operations without the user's agreement. Most official applications, available on the popular application portals, undergo code verification. But the code of a malicious application may exploit not-yet-disclosed vulnerabilities of the phone. In short, this reinforcement of software attacks on embedded environments, in greater numbers and more sophisticated, has forced the designers of execution environments, such as Apple, Google and Microsoft, to further strengthen the tools protecting their products against software attacks. These are the tools we propose to review in this article.

The intrinsic security of mobiles (for those who had a doubt). Execution environments include mechanisms protecting them from the too-easy loading of malicious applications. Official applications are generally signed by the service provider and/or by the phone manufacturer; this signature includes the verification of the application's permissions, namely the libraries this application will be allowed to access during its execution. It is also frequent that, even before the phone's OS boots, the OS verifies the legitimacy of each of its components: peripheral drivers, middleware, application libraries. This is the principle of secure boot.

Application security functions. The iOS, Android, Windows Phone and BlackBerry OS environments also offer functions, made available to application developers, that allow them to harden their applications. In the latest Android version, Marshmallow, one finds packages such as android.hardware.fingerprint to manage fingerprints, and android.security.keystore to generate keys and perform cryptographic operations. The idea is to give developers tools to build a more robust security model within their applications themselves. One can thus add user authentication through fingerprint verification, and transmit content between the server and the client, ciphered or signed to ensure its confidentiality or integrity (or why not both).

The Trusted Execution Environment (here we are). Mobile applications, even with the traditional security barriers in place, can be subject to attacks from malicious software residing in the phone, or nearby. Fortunately, the art of securing embedded and open environments, such as phones, evolves and adapts. Thus, a new kind of technology has quietly appeared on the mobile planet: the Trusted Execution Environment (TEE). Let's spend a few moments on the definition of this technology. What are its merits and specificities? The TEE is a technology which guarantees that application code is executed in a secure manner. More precisely, the TEE guarantees that the code and data of an application cannot be modified or read by a malicious application. Thus, integrity and confidentiality are preserved for an application stored in the TEE. This technology is defined by a standardization body named GlobalPlatform. This organization gathers companies and industries from different horizons, from phone component manufacturers to phone assemblers, via banking application providers and telecom operators. The TEE technical standards thus describe the possible states of an application stored inside it, the behavior in case of problem detection, and the different libraries made available for developing applications. GlobalPlatform also defines functional tests, allowing functional conformance to be demonstrated. There is also a methodology for certifying the security robustness of products embedding this technology. In short, the TEE is a standardized and certifiable technological object.

The TEE in phones, a myth? No. It has made a discreet appearance in phone environments over the past few years, for functions internal to the phone. iOS has for some time mentioned a technology called Secure Enclave, whose virtues resemble the TEE. Samsung indicated that its Knox product line, initially dedicated to applications for production or remote management of phone fleets, relied on TEE-type technology. Recently, the Android platform clarified its usage of this technology. Since the beginning of 2016, the Android Marshmallow environment gives developers access to the TEE technology. This feature is called Trusty TEE. So, what does this technology consist of?

Trusty TEE, what is it? Trusty TEE, which appeared in Android 6.0, is a software layer offering the services of a TEE. Trusty is composed of three elements: (1) an execution environment called the Trusty OS, (2) internal libraries allowing the Trusty OS to access Linux resources in a secure manner, and thus to develop secure applications, and (3) a library allowing access, from the so-called normal environment, to the applications hosted in the Trusty OS. It is thus an environment separated from the rest of the phone, sheltering so-called secure applications that can be accessed by applications of the normal world, the traditional Android applications.

What are the use cases? In theory, a privileged execution environment protected against software attacks, as the TEE is, is very attractive for protecting sensitive applications. More exactly, since not everything can be executed in a TEE, for lack of resources, one will preferably use the TEE for the execution of sensitive functions. For example, a secret comparison, a cryptographic operation such as the generation of a signature, secret storage… The Android documentation provides a list of relevant examples: banking applications, authentication applications, DRM (yes, sorry…)…

How does it work? In practice, for the moment Trusty does not allow the average developer to load secure applications. This remains the privilege of the phone manufacturer, at the moment it assembles the components and integrates its code. One can thus imagine applications managing fingerprints (capture, storage and verification), or pre-loaded banking applications. To use services secured by the Trusty environment, each application must already be loaded in the Trusty environment. Once loaded, the application declares the services it offers, through a name declaration (in reverse-domain form, for example "com.mabanque.payment"). This service is then made available to so-called normal applications, running in the normal, so-called non-secure environment.

How to use the services offered in Trusty (otherwise, you can also read the doc). There is a Client API and a Server API, which connect a secure application with an application of the so-called non-secure world. Note that it is also possible for a secure application to make its services available to another secure application. Here is, in summary, how it all works. On the Server API side, ports are declared with port_create(), and the arrival of events is listened to with a wait() function. On the Client API side, a connection to a known port is opened through the connect() method, and a channel number is assigned. Once the connection is accepted by the secure application offering the famous service, the applications can exchange messages, using the Messenger API to transfer data through the send_msg() and get_msg() functions. There is no particular format expected for these data transfers, since they are application-specific. Nevertheless, when opening the port and the channel, one can specify whether one wants communication with several buffers, and/or in an asynchronous manner.

In conclusion. The technology allowing pieces of code to be executed securely, guaranteeing confidentiality and integrity, is expanding. The proof is that it is now reachable by mobile application developers. We now eagerly await the first services that phone manufacturers will make available in this Trusty environment.

Some important references. Yes.

TEE standards defined at GlobalPlatform: http://globalplatform.org/specificationsdevice.asp

Trusty documentation: https://source.android.com/security/trusty/

Note: picture by "Un savoisien à Paris" (http://savoieinparis.over-blog.com/article-cadenas-du-pont-des-arts-57653780.html)