Challenges in IoT security
1.1. High Level Findings
Some issues were identified at a high level. Interestingly, very few of these directly concern the hardware itself; most centre around software and documentation.
Reimplementation of functionality
Although silicon vendors provide board support packages, development environments, and documentation, it is still significant effort for a device manufacturer to join these together into a secure, finished product.
It is rare for a fully flexible, secure bootloader or firmware update mechanism to be provided. Application notes and sample code may assist, but are often far from complete solutions.
Cryptographic libraries, where provided, operate at a low level. Developers must implement high-level functionality themselves. Cryptography is incredibly complex, and mistakes are often made.
Developers feel frustration at having to reimplement functionality, when they know that many other products will have very similar needs.
Silicon vendors can assist in this area by providing ready-made implementations that suit most applications.
Complex chip selection
The range and variety of microprocessors is extremely large. At the design stage, considerations such as amount of memory, power consumption, user-centric functionality and cost drive chip selection. For this reason, this type of information takes centre-stage in marketing material, datasheets, and comparison tables. It is very rare to find security functionality presented in such a manner.
Security functionality is currently poorly understood by engineers and developers.
As a result, it is common to find IoT devices that are either lacking security functionality, or contain security features that go unused.
Silicon vendors should be more transparent around security functionality. This would include clear and consistent labelling of security functionality in datasheets, and providing details around the threat models that devices are designed to operate under.
Documentation and training
Developers often receive no specific training in developing secure products. Time-to-market is often critical, placing pressure on developers to build solutions quickly, relying on third-party code. Further to this, many developers involved with IoT have no formal embedded engineering background.
This means that security often takes a back-seat.
Providing documentation and training that is security-focused and aimed at software engineers would make developing secure products far easier.
Complex software supply chains
Many IoT products have extremely complex software supply chains, with multiple stages between the application developer and the silicon vendor.
This often makes it difficult for the application developer to fix vulnerabilities outside of their domain. Even if the vulnerabilities are fixed, they may never feed-back through to other device manufacturers.
The parties higher up the supply chain have little motivation to spend time and effort on security.
For this reason, it is vital that devices, software and documentation follow a secure-by-default pattern, where the default settings and implementation tend towards a secure rather than permissive.
Changing threat landscape
The threat landscape that embedded devices operate in is widely varied. For example, an internet connected oven that never leaves the home and contains minimal sensitive data is very different to a smartphone.
The range and type of attacks against hardware are constantly evolving. Code readout bypasses, differential power analysis, and active fault injection have only really become mainstream in the last 5 years.
The development life cycle for silicon is very slow compared to software, and presents some unique challenges. If silicon is shipped with bugs or vulnerabilities, it is possible that the only fix is physical replacement. This is obviously undesirable.
It is for these reasons that device manufacturers need to realise that hardware security is not binary and absolute. A processor considered secure today may have significant vulnerabilities found within the lifetime of a typical product.
Defence-in-depth must be practised. This means that security is considered and implemented at every possible stage, so that for attacks to succeed, multiple security controls must fail. This prevents a single failed control from compromising the entire system.
1.2. Required for a Secure Product
For a typical consumer IoT device such as a router, home-automation gateway, home-assistant or IP camera, the following recommendations are made.
A secure bootloader should be provided. Only signed images should be loaded by the device, and there should be the option to encrypt images for confidentiality.
A firmware update mechanism should be provided. Firmware should be downloaded over an authenticated and encrypted channel.
In combination, the secure bootloader and firmware update mechanism should prevent firmware rollback attacks, where earlier versions with known vulnerabilities are loaded onto a device.
Dual-banked firmware should be supported, allowing for multiple firmware images to be stored. This allows for safe firmware upgrades.
The device should support a means to prevent the flash contents from being read, including the firmware, settings and any stored data. For a microcontroller, this could be achieved by preventing internal flash from being read out. For a microprocessor, this could be achieved by encrypting external flash.
It should be easy to entirely disable all debug interfaces on the device without risk of preventing firmware updates.
The device should have a source of entropy available for cryptography. Ideally this should be a dedicated hardware random number generator.
A real-time clock should be provided to allow the device to properly check certificate validity.
Internal secure storage should be provided to store keys, certificates and other sensitive data.
A range of operating systems should be provided, including a real-time operating system and full Linux system. Tools should be provided to allow these to be hardened appropriately for production systems.
Cryptographic libraries should be provided to perform common operations such as encryption, signing, and hashing. These should have high-level interfaces and use secure settings by default.
Software provided for use on devices should use a permissive license which is compatible with other open source licenses.
Threat modelling should be performed against devices and software provided. The results of this should be openly stated in documentation.
Documentation and training should be improved with the goal of training developers in security.
1.3. Potentially Required for a Secure Product
In the current security environment and market, for a typical IoT product, the following are generally not required, but specific circumstances could make them necessary.
A dedicated secure element (SE) or hardware security module (HSM) would be considered excessive for many consumer devices. They provide additional resistance to physical attack and to many other classes of software attack. For devices that contain significant sensitive data, or are at particular risk of physical theft, they could be considered.
Local side-channel attacks and glitching attacks are becoming easier to perform as knowledge and tools mature. These attacks are challenging to develop, and are highly dependent on the specific devices used and the firmware that runs on them. On the other hand, they are very hard to protect against using hardware, with most current general purpose embedded processors at least somewhat vulnerable. Generally, they should only be considered a viable threat if the reward is suitably high.
1.4. Not Required for a Secure Product
Physical tampering with the semiconductor packaging (e.g. decapsulation) should not be considered a viable threat. Mitigating against physical tampering is incredibly difficult, and generally requires additional hardware such as tamper switches, tamper grids, and back-up batteries. It would be exceptionally rare for a consumer IoT device to contain secrets worthy of this level of protection.
Physically unclonable functions are considered a highly secure way of generating key and certificate material from intrinsic properties of the semiconductor device. Whilst conceptually simple, it is challenging to ensure that the output of the function is constant across all operating conditions. As of now, they would be considered overkill compared to generating a key from a local entropy source and storing it securely.
Battery-backed key storage uses SRAM to store sensitive data. If power is removed – either by removal of the battery, or triggered with tamper detection – then the keys are lost. This is found in pin-entry devices and other highly secure devices, but would be considered unnecessary on most IoT products.
2. Methodology
The findings in this report have been drawn from experience and knowledge gained from many sources:
- Over 30 hardware security tests of a range of hardware from connected toys through to entire vehicles.
- Several code reviews of bootloaders and firmware update mechanisms.
- Several collaborative specification and design reviews for embedded systems.
- Multiple IoT security training courses delivered to developers and engineers across a range of industries.
- Discussions with developers and engineers at conferences and trade shows.
- Social media, both general IoT security discussion and polls and questions specifically for this document.
- Published vulnerabilities reported in hardware products over the last few years.
- Current IoT security guidelines published by various bodies.
- Experience working with a wide range of processor architectures and devices during hardware testing.
A list of just under 100 security-related issues was drawn up, then grouped and ranked to identify patterns and find root causes. Where these root causes could be influenced by the silicon vendor, the issue and a recommended solution are presented.
For some of the more complex issues, detailed explanations are provided alongside examples of situations where the problem has occurred.
We also looked across the market at current embedded solutions and IoT platforms to identify any gaps in the market, as we perceive it.
3. Specific Recommendations
3.1. Device and Firmware
These recommendations specifically concern the device, bootloader, and firmware that runs on it. This area is closest to the current domain of the silicon vendor.
3.1.1. Secure bootloader
The bootloader is at the root of trust in embedded devices. A bootloader can protect against multiple attacks:
- Encrypted firmware images can protect the confidentiality of the firmware. This can protect intellectual property and common key material, and prevent vulnerabilities from being discovered.
- Integrity-protected firmware images can prevent accidental firmware modification, such as damaged flash memory or noise on data lines.
- Signed firmware images prevent an attacker from loading malicious firmware onto a device.
- Rollback protection prevents an attacker from loading an older signed firmware with known vulnerabilities.
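As a sketch of these checks, the following C fragment accepts an image only if it has the expected type, passes an integrity check, and is not older than a stored rollback floor. The header layout, magic value and use of a CRC are illustrative assumptions; a CRC provides integrity against accident, not authenticity, so a real bootloader would verify an asymmetric signature at the marked point.

```c
/* Sketch of a bootloader image-acceptance check covering integrity and
 * rollback. Header layout and magic value are illustrative assumptions. */
#include <stdint.h>
#include <stddef.h>

#define IMAGE_MAGIC 0x464D5721u  /* hypothetical image-type marker */

struct image_header {
    uint32_t magic;
    uint32_t version;   /* monotonically increasing firmware version */
    uint32_t length;    /* payload length in bytes */
    uint32_t crc;       /* CRC-32 of the payload */
};

/* Bitwise CRC-32 (IEEE polynomial, reflected). Slow but tiny. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int)(crc & 1));
    }
    return ~crc;
}

/* Returns 1 if the image may be booted. min_version is the rollback
 * floor, held in monotonic storage (e.g. OTP fuses or protected flash). */
int image_acceptable(const struct image_header *hdr,
                     const uint8_t *payload, uint32_t min_version)
{
    if (hdr->magic != IMAGE_MAGIC)
        return 0;                                   /* wrong image type  */
    if (crc32(payload, hdr->length) != hdr->crc)
        return 0;                                   /* integrity failure */
    if (hdr->version < min_version)
        return 0;                                   /* rollback attempt  */
    /* A production bootloader would additionally verify an authenticity
     * signature (e.g. ECDSA over the header and payload) here. */
    return 1;
}
```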
A bootloader that implements this functionality will be relatively complex and use cryptography. In some architectures, there may be tight space constraints on the bootloader, often fighting with application code for space. In some applications, devices must boot extremely quickly, precluding the use of processor intensive functions.
Many semiconductor vendors provide a basic bootloader, sometimes with more complex variants described in application notes. It is rare for there to be a complete solution. This leads to device manufacturers developing their own bootloaders.
Often the application notes pay little to no attention to threat modelling. For example, it is very common to find the concepts of confidentiality and authenticity conflated. Encrypted firmware images are often automatically considered authentic. Many device manufacturers place high value on protecting the confidentiality of the firmware, whereas authenticity can be far more important.
- Provide a bootloader as capable as the device can support, ideally allowing encryption, integrity, authenticity and rollback prevention.
- Provide tools to encrypt and sign firmware images.
- Explain the protections the bootloader provides in clear language.
- If there are trade-offs or options involved, explain how these can degrade the security.
- Provide benchmarks indicating typical boot times.
3.1.2. Firmware update
All software will contain bugs and vulnerabilities. For devices of any complexity there should be a mechanism in place to update firmware remotely.
A firmware update mechanism shares the same basic goals as a secure bootloader, along with several additional ones:
- The channel over which firmware images are downloaded can be encrypted, protecting the confidentiality of firmware in-transit.
- The channel over which firmware images are downloaded can be authenticated, preventing the device from downloading invalid firmware images.
- The download mechanism can deal with the use of a slow or lossy communication channel such as GPRS.
- A firmware update mechanism can perform decryption, integrity, authenticity and rollback checks before handing over to the bootloader, reducing downtime.
- A firmware update mechanism can allow multiple images, allowing fall-back to a “last known good” status should there be a problem during upgrade. This is frequently termed “dual-banking”.
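The dual-banking fall-back described above can be sketched as a simple selection rule: boot the highest-version bank that has been marked valid, so a failed upgrade falls back to the last known good image. The bank metadata layout is an illustrative assumption.

```c
/* Minimal dual-bank boot selection sketch. Metadata layout is assumed. */
#include <stdint.h>

struct bank {
    uint8_t  valid;    /* set only after the image verifies and boots once */
    uint32_t version;
};

/* Returns the index of the bank to boot, or -1 if neither is usable. */
int select_boot_bank(const struct bank banks[2])
{
    int best = -1;
    for (int i = 0; i < 2; i++) {
        if (!banks[i].valid)
            continue;                      /* skip unverified/failed bank */
        if (best < 0 || banks[i].version > banks[best].version)
            best = i;                      /* prefer the newest valid bank */
    }
    return best;
}
```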
Given the proposed scale of deployment of many IoT products, firmware updates must be provided over-the-air (often termed Firmware-Over-The-Air or FOTA).
These updates should be able to be delivered automatically using a notification mechanism.
Firmware update mechanisms are sometimes provided by silicon vendors, but they very rarely address any of the security or reliability challenges.
Not all devices have the capability for their firmware to be upgraded safely over-the-air.
- Provide a firmware update mechanism that allows downloads over a secure and authenticated channel, potentially over a lossy communication medium.
- Allow for multiple firmware images to prevent devices being bricked in the field.
- Provide guidance for integrating the upgrade mechanism into applications to allow for automated firmware upgrades.
- Where some or all of this is not possible, this should be made clear.
3.1.3. Device hardening
One of the key differences between IoT and general-purpose computing hardware is the ease with which an attacker can access the device. If the target is the device itself, an attacker can simply purchase the device to reverse engineer it and discover vulnerabilities. If the target is the user of the device, hardware products are often installed in physically exposed locations, and can be accessed by malicious parties without detection.
Device manufacturers may want to keep data confidential for a number of reasons:
- Intellectual property – many embedded systems must make real-time decisions, and hence contain sensitive intellectual property.
- Class-wide keys – despite advice to the contrary, it is common to find key material that is common across an entire class of device.
- Per-device keys – most devices will contain some unique key material, whether that be certificates to access cloud services or a user’s WiFi password.
- Vulnerabilities – all devices contain bugs and vulnerabilities, and it could be argued that preventing these being discovered is a valid layer of security.
For these reasons, IoT devices are often required to be hardened to attack by someone with access to the device.
It must be accepted that given time, skill and resources, an attacker can nearly always gain access to protected aspects of the device. The protection the device provides should be commensurate to the asset being protected; a device containing computer vision algorithms could be attacked for months by skilled attackers, whereas a home thermostat may only need to resist a short period of attack by a bored teenager.
Despite the requirement that devices are hardened to attack, device manufacturers often fail to do so adequately. Firmware and secret recovery often takes less than an hour. There are several causes of this:
- Multiple routes into a device (e.g. a serial bootloader console) exist, and only a subset of these have been closed.
- Documentation is not clear.
- Developers fear locking themselves out of devices.
- Backdoors are left in to diagnose faults and handle RMAs.
- Development functionality is left in production hardware.
- Vulnerabilities in code readout protection allow controls to be bypassed.
- It should be possible to disable all debug interfaces (serial, JTAG, proprietary) on a device easily.
- Whilst in the state where debug interfaces are disabled, it should still be possible to update firmware.
- Documentation should make clear the means in which to secure a device.
- Some devices should provide functionality for authorised parties to unlock previously locked devices.
- The level of protection afforded by a device should be clearly stated.
- Vulnerabilities in code readout protection should be published and added as errata.
3.1.4. Random number generation
Random numbers are used frequently in cryptography. Key generation, challenge/response algorithms, nonces and initialisation vectors can all require a source of entropy.
If the entropy can be predicted or forced to be predictable, this can open the device up to many weaknesses.
For this reason, some devices have a hardware or true random number generator (TRNG). These use thermal, avalanche or environmental noise to generate entropy at a reasonable rate.
Not all devices have a TRNG. It is common to find software-based schemes that use whatever is available to gain entropy, e.g. a floating ADC input. The specific implementation of these schemes can render them insecure.
Most sources of true entropy limit the rate at which entropy can be drawn. Often this is slower than is required by the device. In this instance, the true entropy is used to seed a pseudo-random number generator, which can provide entropy at a faster rate.
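The seed-and-expand structure described above can be sketched as follows. The fixed seed value stands in for a slow hardware entropy source, and xorshift64 stands in for the fast generator purely to show the shape; xorshift is not cryptographically secure, and a real device would expand the seed with a vetted CSPRNG such as an HMAC-DRBG.

```c
/* Structure sketch: slow entropy seeds a fast deterministic generator.
 * NOT a secure design -- xorshift64 only illustrates the expansion step. */
#include <stdint.h>

/* Stand-in for a slow hardware entropy source (illustrative only;
 * a real implementation reads TRNG peripheral registers). */
static uint64_t trng_read_seed(void)
{
    return 0x9E3779B97F4A7C15ull;
}

static uint64_t prng_state;

/* Seed once from the slow source; avoid the all-zero fixed point. */
void prng_seed(uint64_t seed) { prng_state = seed ? seed : 1; }

/* xorshift64: fast expansion of the seed into a stream of values. */
uint64_t prng_next(void)
{
    uint64_t x = prng_state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return prng_state = x;
}
```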
- Hardware random number generators should be available on nearly all devices.
- The rate at which entropy can be drawn should be stated.
- If not available, a verified secure software scheme should be provided to generate entropy from other sources. The limitations of this should be made clear.
- A cryptographically secure pseudo random number generator (CSPRNG) should be provided to allow entropy to be drawn at a faster rate.
3.1.5. Secure storage
Devices are likely to store confidential data, whether that is key material used to secure communication, a user’s WiFi password, or keys and certificates used as part of a hardware root of trust.
In many cases, it is possible for a microcontroller to store data on internal flash or EEPROM memory with relatively strong protection afforded by simple memory read/write protection.
Some devices contain a secure element (often called an HSM or TPM), allowing small amounts of data to be held securely. These have protections above and beyond flash memory, and contain additional functionality such as the ability to sign and encrypt data using keys which cannot be read by normal means.
Some microprocessors have no internal flash. This prevents them storing sensitive data internally, and makes it difficult for them to encrypt external flash securely as there is nowhere to store the keys.
An attacker with physical access to a device will likely be able to recover the data given time, skill and resource.
- All devices should have the ability to store data securely.
- The protection afforded by the device should be clearly stated.
- Devices using external flash memory should have the ability to encrypt the flash with the keys stored securely. Libraries should be provided.
3.1.6. Per-device programming
Nearly all devices will need to be unique to an extent, whether that be a serial number, identifier, or key material. It is common to find that devices use identical firmware images, with the only differentiation being the MAC address of the network interface.
Per-device programming in a factory adds cost and complexity to manufacturing, but skipping this can lead to security vulnerabilities. Examples include:
- Deriving key material directly from MAC address, leading to predictable keys with low entropy.
- Devices lacking a means to strongly authenticate with cloud services, allowing clones and modified devices to connect.
- Common key material and passwords found across all devices.
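To illustrate the first of these, the following hypothetical derivation folds the three variable bytes of a MAC address into a “key”. Whatever function is used, only the 24 non-OUI bits vary, so the resulting key can be recovered by exhaustive search over at most 2^24 candidates.

```c
/* Why MAC-derived keys have low entropy: the OUI (first three bytes) is
 * fixed per vendor, so any key derived from the MAC has at most 2^24
 * possible values. The derivation below is a hypothetical example. */
#include <stdint.h>

/* Naive derivation: fold the 3 variable MAC bytes into a 32-bit "key".
 * The constant "salt" does nothing to enlarge the search space. */
uint32_t key_from_mac_suffix(const uint8_t suffix[3])
{
    return ((uint32_t)suffix[0] << 16) |
           ((uint32_t)suffix[1] << 8)  |
            (uint32_t)suffix[2] | 0xA5000000u;
}
```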
- Provide guidance and tools to allow secure per-device programming.
- Provide libraries and application notes for using secure crypto elements as an alternative.
- Work with IoT platform providers (AWS, Azure etc.) to provide ready-made applications to commission and interact with their platform.
3.2. Operating System and Software
This section covers the operating system, libraries, and applications running on the device.
3.2.1. Operating system choice
A developer will have the choice between:
- Bare metal operation using a simple state machine or task scheduler.
- A real-time operating system such as FreeRTOS or VxWorks, providing tasks and memory management.
- A full operating system such as Linux.
It is common to find that an inappropriate choice of operating system has been made. For example, a real-time operating system has been selected which has a limited selection of cryptographic libraries making communications insecure. Often a full Linux operating system is found, with the associated attack surface, when a bare metal solution would be adequate.
This can also impact hardware choice, and hence security. A Linux operating system will often force the use of a microprocessor with external flash, which can be harder to secure than a microcontroller with internal flash.
IoT developers frequently lack any formal training in embedded systems, and are under pressure to deliver products quickly. As a result, the only viable options for many devices are the ready-made operating systems available.
- Make a range of operating system choices available for devices.
- Produce documentation comparing the available choices.
- Attempt to maintain and update the available operating systems.
3.2.2. Build hardening
Linux (and its derivatives) is the most common operating system found on IoT devices of any size. Most home routers, IP cameras, thermostats, home assistants, and home automation gateways will be running Linux.
A full operating system is complex and can have a significant attack surface. Reducing this attack surface is a key part of maintaining a secure system, yet it is common to find problems:
- Unused and risky services such as telnet and web servers are found running.
- Accounts with default or no passwords are available.
- The Linux kernel is several years out-of-date and has known vulnerabilities.
- Software is not kept up-to-date and has known vulnerabilities.
- Guidance and tools should be provided for building a Linux system from scratch using systems like buildroot or Yocto.
- Hardening guidelines and configurations should be provided to make the Linux operating system more secure.
- Checklists should be provided for when alternative systems are used.
3.2.3. Cryptographic libraries
Cryptography is at the heart of most security functionality. Encryption of data at-rest and in-transit, the signing of firmware images, and hashing of passwords all rely on cryptographic functions. Whilst often conceptually simple, it is very easy to make mistakes when implementing cryptographic systems.
Broadly speaking, cryptography security failings can be grouped into several categories:
- “Roll your own” encryption, where custom algorithms have been written by developers. These frequently have very significant issues.
- Secure cryptographic primitives used insecurely, where software uses low-level functions built up into an insecure system.
- Configuration issues, where secure libraries and functions are configured in an insecure manner.
There are multiple causes of these issues:
- Developers generally have no formal training in cryptography or security.
- Silicon vendors often provide no cryptographic libraries, or they are complex to use.
- Cryptographic libraries that are available provide high degrees of flexibility, giving developers “rope to hang themselves”.
- Embedded systems can lack the capability to perform processor intensive cryptographic operations.
- Many businesses still have a “not invented here” attitude, wanting to develop all software from the ground up.
- Provide verified and secure cryptographic libraries for embedded devices.
- Libraries should provide high-level operations using secure cryptography configured in a secure manner.
- Cryptographic libraries should follow best practice for coding standards, including constant time operations and memory zeroisation.
- Benchmarks should be provided for common operations.
- Documentation should be provided to assist developers making informed choices around cryptography.
- A real-time clock should be available to make checking for certificate validity possible.
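The constant-time operations mentioned above can be illustrated with a comparison routine. Unlike memcmp, which returns at the first differing byte and so leaks how much of a secret (e.g. a MAC or password hash) matched through timing, this version runs in time independent of the data.

```c
/* Constant-time buffer comparison: no data-dependent branch or early
 * exit, so running time does not reveal where the buffers differ. */
#include <stddef.h>
#include <stdint.h>

/* Returns 0 when the two buffers are equal, non-zero otherwise. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences bitwise */
    return diff;
}
```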
3.2.4. Supply chain length
It is becoming less and less common for a single company to design, develop, manufacture and operate IoT products. Frequently, devices are white-labelled from one company, manufactured by another, and running code developed by yet another. This has increased the length of the software supply chain significantly.
For example, many digital video recorders on the market use a HiSilicon processor, which is ARM-based. HiSilicon provide a toolchain, a basic Linux operating system, and some drivers for the video hardware. A large company called XiongMai then produces reference PCB designs, a full operating system, UI and web interface. Multiple companies then produce the hardware itself, with a degree of customisation. Finally, a reseller will white-label the device with logos and minor tweaks, before it is sold to the customer and on to end-users.
Security requirements from the end-users rarely make it back up the supply chain. Security updates rarely make it down from the silicon vendor to the end-user. This leads to devices being shipped vulnerable and staying vulnerable.
- Provide a board support package, bootloader, and means of building an operating system with a known good level of security.
3.2.5. Open-source licensing
A significant number of software components are open-source licensed. The legal terms of these licenses vary greatly. Some are very permissive, and do not require changes to be published. Others require the full source and toolchain to be released. Interpreting licensing terms can be extremely complex. Not complying with open-source obligations can lead to legal and reputational risk.
It is very common to find device manufacturers handling their obligations in this area poorly, either avoiding the area entirely or publishing more than they are required to.
- All libraries released by semiconductor vendors should use a permissive and compatible open-source license.
- Guidance should be provided to help device manufacturers comply with open-source obligations.
- Giving back to the open-source community should be encouraged.
3.3. Documentation and Soft Issues
This section concerns documentation and soft issues such as training, community, and changing workforce.
3.3.1. Development challenges
All development presents challenges, and embedded systems have additional complexity. This is a broad area, but the following problems have been observed repeatedly over the course of many tests:
- Developer education of security issues such as buffer overflows, sanitisation and validation of user input, and compile time hardening is almost non-existent.
- The use of the secure development lifecycle and associated good practices such as use of version control and providing detailed release notes is rare.
- Applications written in low-level languages such as C and C++ make heavy use of unsafe operations such as memset, memcpy, and printf, resulting in exploitable buffer overflows.
- Compile-time hardening such as stack canaries, address space layout randomisation and non-executable stacks is rarely used.
- Specialist embedded engineers are becoming less common.
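As a small illustration of the unsafe-operation problem noted above, the following sketch replaces an overflowing strcpy with a bounded snprintf; the field name and size are illustrative assumptions.

```c
/* Bounded string copy: snprintf never writes more than the stated size
 * and always NUL-terminates, where strcpy() would overflow the buffer. */
#include <stdio.h>

/* Copies name into a fixed 16-byte field without overflowing it. */
void set_device_name(char dst[16], const char *name)
{
    /* strcpy(dst, name) here would overflow for names of 16+ chars;
     * snprintf truncates instead. */
    snprintf(dst, 16, "%s", name);
}
```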
- Provide training material to understand the basics of security in embedded systems.
- Provide guidance on the use of secure development lifecycles and version control systems.
- Where possible, libraries should provide high-level functions that avoid the direct manipulation of memory.
- Development environments should use compile-time hardening options as defaults.
- Benchmark for compile-time hardening should be provided.
3.3.2. Transparency
Discussions with a range of parties involved in embedded development have highlighted concerns around the transparency of semiconductor vendors. Some examples of this include:
- Access to full datasheets can require a commercial arrangement, which in-turn prevents smaller device manufacturers and start-ups from working with certain chips. This is especially common with security focused devices.
- Development environments and toolchains are not openly available, making hobbyist, student and trainee learning harder.
- Errata and datasheet updates for security issues are extremely rare.
- Provide full datasheets without having to enter a commercial arrangement.
- Encourage learning about the platform by providing a development environment freely.
- Clearly publish errata and changes to datasheets, particularly with respect to vulnerabilities.
3.3.3. Threat modelling
Chips often contain dedicated security hardware, and libraries and application notes provide security functionality. It is exceptionally rare for a silicon vendor to honestly discuss the threat models that have been used to develop these chips, and how they can be applied to real products. Questions that are rarely answered include:
- Are secure memory elements designed to withstand sustained local attack?
- Are cryptographic primitives resistant to timing attacks?
- How long will the default key strengths resist attack for?
- Have the provided libraries been subject to static code analysis or formal code review?
- If the flash memory from a single device is recovered, can it impact all devices?
Device manufacturers are frequently found to perform little threat modelling, resulting in them protecting their device and system against threats that are unrealistic.
- Explicitly state the threat model that a device or software solution has been developed under.
- Provide guidance for suitable devices for given applications.
- Improve documentation to ensure that security functionality is fully used.
3.3.4. Community
In the last ten years, the maker movement and hobby electronics have exploded. Hundreds of thousands of single-board computers such as the Raspberry Pi have been sold, alongside more conventional development boards such as the Arduino.
A large online community has flourished around these devices. Forums, blogs, tutorials, and even entire companies have been spawned from hobbyist hardware. This has brought about changes:
- The use of open-source software is expected
- Code sharing and associated re-use is common
- Developers feel able to ask questions and then receive appropriate answers
- Focus has shifted from low-level (bare metal, embedded, OS) to application development
- Developers can easily work with hardware with no previous experience
Whilst this has brought about many advantages, there are also downsides:
- Popular answers to questions may not result in good security
- Code that is heavily re-used is assumed to be secure
- Developers can quickly end up out of their depth
Whilst several semiconductor vendors have created forums, they are rarely successful or popular.
- Foster a community around products to assist developers to produce secure products
- Provide libraries and application notes to solve real-world problems
- Work with popular open-source projects to improve security
3.3.5. Changing workforce
Over the course of many hardware security tests and workshops with developers, it has become clear that the majority of developers have no previous security training. Concepts such as buffer overflows, sanitising user input, and command injection are unfamiliar.
Further to this, there has been a strong shift away from dedicated embedded developers towards more general developers. This can mean a web developer (by training and experience) will be responsible for an IoT product. A lack of understanding of the hardware itself can lead to security problems:
- Conventional developers are used to frameworks and libraries that have built-in security. This is often not the case with embedded code.
- Moving from “full stack developer” on a web application to the same on an embedded system brings several additional layers that can take years to understand.
- IoT products present unique security challenges that are poorly documented and can have no clear solution.
- Hardware functionality present in devices (such as TRNGs and HSMs) remains unused due to a lack of understanding.
- Libraries, application notes and any other software should be secure by default.
- Documentation should be available at a level that can be read by a software engineer, not an electronic engineer.
- Security guidelines should be available to provide concise best-practice guidance for IoT.
4. Example Issues
Some of the issues mentioned earlier in this document are either complex or may appear implausible without specific examples being given.
4.1.1. Inadequate Entropy
Most cryptographic systems will require a source of entropy (or random numbers), for the generation of private keys, session keys, and nonces.
If this entropy is not drawn from a suitably unpredictable source, an attacker can predict the “random” numbers, leading to the system becoming compromised.
Example: Arduino documentation recommends use of ADC for entropy
The documentation for the popular Arduino development boards recommends that users seed the random number generator (RNG) using a floating ADC input. This has several problems.
The RNG takes a 32-bit seed, but the ADC is only 10 bits. This severely limits the search space for an attacker wishing to determine the seed.
A floating ADC input does not produce random values across the full 10-bit range. Typically, only around 50 distinct values are generated. The search space is now tiny.
Although the Arduino is often seen as a toy, the same method has been found in commercial products.
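The scale of the problem can be shown with a short simulation. The sketch below is illustrative only: it assumes, as described above, that the floating ADC pin produces one of roughly 50 clustered 10-bit readings, and it models the attacker simply trying every plausible seed.

```python
import random

# Hypothetical illustration: a device seeds its PRNG from a floating ADC
# pin that in practice only ever yields ~50 distinct 10-bit values.
OBSERVED_ADC_VALUES = list(range(480, 530))  # assumed cluster of readings

def device_session_token(seed):
    """Simulate a device deriving a 'random' token from the weak seed."""
    rng = random.Random(seed)
    return rng.getrandbits(32)

# The device picks a seed the attacker cannot observe directly...
secret_seed = random.choice(OBSERVED_ADC_VALUES)
token = device_session_token(secret_seed)

# ...but the attacker simply enumerates every plausible seed.
recovered = [s for s in OBSERVED_ADC_VALUES if device_session_token(s) == token]
assert secret_seed in recovered  # at most ~50 guesses needed
```

Even though the seed is nominally 32 bits, the attacker's effective work factor is the number of values the ADC actually produces.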
Example: Linux board produces same SSL keys
After factory-reset, a Linux based device generates a new SSL key for securing HTTPS communications. It does this as the board boots.
The default sources of entropy in Linux are keyboard and mouse timings and hard disk access timings. In an embedded system with no user input and flash memory with highly consistent timings, there is inadequate entropy.
It was found that the board produced a duplicate key after being rebooted less than 10,000 times. Although this may seem insignificant, a large deployment of IoT devices could see this issue occur accidentally, and a determined attacker could find duplicate keys themselves.
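A rough back-of-envelope calculation shows how little entropy a duplicate within 10,000 boots implies. Assuming keys are drawn uniformly from some pool of N possible values, the birthday bound puts N at roughly n²/2 when a collision appears after n trials:

```python
import math

# Birthday bound: a duplicate within n independent draws suggests the pool
# of possible keys N is roughly n^2 / 2 (uniform-draw assumption).
n = 10_000                      # reboots before a duplicate key was observed
estimated_states = n ** 2 / 2   # ~5e7 distinct possible keys
effective_entropy_bits = math.log2(estimated_states)

print(f"Effective entropy: ~{effective_entropy_bits:.1f} bits")
# Roughly 26 bits -- orders of magnitude below the 128+ bits
# expected of key material protecting HTTPS traffic.
```

This is only an estimate, but it makes clear that the key generation process was drawing from a tiny state space.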
Developers must be made aware of the risks of drawing entropy from predictable sources.
Many microprocessors lack a true/hardware random number generator. In devices such as these, it is possible to gather entropy from ADC inputs or thermal noise, but only after significant software processing. This should be provided as a library.
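One common approach to that software processing is to pool many low-quality noise samples and condition them through a cryptographic hash. The sketch below assumes the raw ADC readings contain sufficient unpredictability in aggregate; hashing cannot create entropy that is not there, it only concentrates it, and a production system should use a vetted conditioning scheme rather than this minimal example.

```python
import hashlib

def condition_samples(raw_samples, out_bytes=32):
    """Condition many low-quality noise samples into a fixed-size seed.

    Sketch only: assumes the samples collectively contain enough
    min-entropy. The hash concentrates that entropy; it cannot add any.
    """
    h = hashlib.sha256()
    for sample in raw_samples:
        h.update(sample.to_bytes(2, "big"))  # a 10-bit ADC reading fits in 2 bytes
    return h.digest()[:out_bytes]

# Example: 512 hypothetical ADC noise readings pooled into one 256-bit seed.
readings = [(i * 37 + 11) % 1024 for i in range(512)]  # stand-in for real noise
seed = condition_samples(readings)
assert len(seed) == 32
```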
Nearly all connected systems will require entropy. Unfortunately, microprocessors with true/hardware random number generators tend to include significant additional security functionality and hence cost much more. Introducing chips with hardware entropy sources at low cost would be beneficial.
4.1.2. Inappropriate choice of operating system
A device manufacturer has a choice between different operating systems:
- A full-blown operating system such as Linux
- A real-time OS such as FreeRTOS or VxWorks
- Bare-metal operation, without a formal OS
The choice of operating system is often found to be inappropriate, meaning:
- The OS is too complex, increasing the attack surface and making it more difficult to keep the device updated and secure.
- The OS is too simple and cannot support modern security protocols or the workload required to be secure.
Example: Low power bare-metal device has knock-on impact
An energy monitor connects via 802.15.4 to the Smart Meter, gathers energy consumption readings, and sends them via HTTPS to a server, allowing customers to view data online. The microprocessor used is from the PIC24 series, running the default Microchip TCP/IP stack. This only supports SSLv2 and SSLv3 with known-insecure ciphers.
For the server to communicate with these devices, OpenSSL has had to be patched to allow them to connect. This will have to occur throughout the lifetime of the devices. Custom patched software is prone to security issues, either directly from the patch or because it cannot receive automated updates.
Beyond this, there is a risk that the encrypted data could be decrypted by an attacker due to the weak ciphers. The likelihood of this happening is small, but must be considered.
Example: Over-complex Linux device is insecure and unreliable
An IoT thermostat is based on an open-source development board and runs a build of the Linux operating system. Communications use HTTPS and are well secured, as the full OpenSSL library is available.
However, the device is found to be running multiple services that are not required: telnet, SSH, and a web server. The system has not been minimised, as the developer does not understand embedded Linux well enough to do so. Further, firmware updates take over 10 minutes and are prone to failure, as providing dual-banked flash for such a large firmware image was prohibitively costly.
Vendors should work with different operating system providers to give their customers a choice of potential operating systems. It is essential that these choices can be kept up-to-date for the foreseen lifetime of the product.
The relative advantages and disadvantages of each choice should be documented.
4.1.3. Avoidance of secure per-device programming/secrets
Nearly all devices will require some form of identification and key material to be stored internally. This allows them to be identified and authenticated against the cloud or server system.
Some devices generate new key material during factory reset, but as a result, they cannot be authenticated as genuine devices.
Generating, programming, and storing sensitive key material in a factory environment is challenging from a security viewpoint, and increases production costs.
Example: MAC address used to generate key material
A connected device required a unique key to connect to the cloud system. To be secure, this key had to be at least 128 bits in length. The costs for programming unique key material were found to be prohibitive.
Rather than hard-code the key, a key-derivation function was used. This took the MAC address of the device, containing only 24 bits of entropy, and extended it to 128 bits. The keys looked random, but an attacker could easily recover all of them by brute force, or by sniffing a single packet on the local network.
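The flaw can be demonstrated in a few lines. The scheme below is a reconstruction, not the vendor's actual code: the vendor OUI prefix, the use of SHA-256 as the key-derivation function, and the specific MAC values are all assumptions for illustration. The point is that stretching 24 unknown bits to 128 bits leaves only 2^24 keys to try; the demo narrows the search to 2^16 so it runs instantly, but the full space is only ~16.7 million hashes.

```python
import hashlib

OUI = bytes.fromhex("0004a3")  # hypothetical vendor prefix (fixed and public)

def derive_key(mac6):
    """The flawed scheme: stretch a 48-bit MAC (24 unknown bits) to 128 bits."""
    return hashlib.sha256(mac6).digest()[:16]

# Only the 24-bit NIC-specific portion of the MAC is unknown to an attacker.
secret_nic = 0x1A2B3C
device_key = derive_key(OUI + secret_nic.to_bytes(3, "big"))

# Brute force (reduced here to a 2^16 window so the demo runs quickly; the
# full 2^24 space is still only seconds of work on commodity hardware).
found = None
for nic in range(0x1A0000, 0x1B0000):
    if derive_key(OUI + nic.to_bytes(3, "big")) == device_key:
        found = nic
        break
assert found == secret_nic
```

No amount of key stretching increases the entropy of the input; the derived keys merely look random.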
Example: Machine used to program device riddled with malware
A device manufacturer has contracted a factory to produce devices. As part of the process, each device is flashed with firmware over JTAG, using a custom utility running on a PC. This generates a per-device certificate, signed by a certificate authority. The CA signing key is stored on the PC.
On visiting the factory to investigate unrelated quality issues, the PC running the utility was found to be compromised with malware. The CA signing key could have been leaked, allowing anyone to produce valid per-device certificates.
Vendors have proposed several solutions to deal with this issue, but none are widely adopted.
Dedicated crypto-chips with key material burned into them are available. These can be used to identify and authenticate devices. The additional cost can be prohibitive.
4.1.4. Cryptography issues
Broadly speaking, cryptography issues can be divided into the following groups:
- Design issues
- Library implementation issues
- Application implementation issues
- Configuration issues
One of the biggest issues found in embedded systems is the use of custom encryption algorithms and systems, commonly termed “roll your own cryptography”. Sometimes these are encryption algorithms built from the ground up; other times they are systems that use secure cryptographic primitives in an insecure manner.
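A toy example shows how even a scheme built from "simple" operations leaks information. This is not taken from any specific product: it is a deliberately weak repeating-key XOR cipher, standing in for the broader class of home-made schemes. Identical plaintext blocks encrypt to identical ciphertext blocks, so an eavesdropper learns the structure of the data without ever recovering the key.

```python
# Toy "roll your own" cipher: XOR with a short repeating key. This leaks
# plaintext structure in exactly the way a proper cipher mode must not.
key = b"SECRETKY"  # 8-byte repeating key (hypothetical)

def weak_encrypt(plaintext, key):
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

msg = b"TEMP=21C" * 4  # repetitive telemetry whose period matches the key
ct = weak_encrypt(msg, key)

# Identical plaintext blocks produce identical ciphertext blocks, so the
# repetition is visible to any eavesdropper.
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]
assert len(set(blocks)) == 1
assert ct != msg  # "encrypted", yet structurally transparent
```

The same pattern-leakage failure appears in real systems that use a strong block cipher in ECB mode, or that reuse a stream-cipher keystream.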
Example: SSL/TLS without authentication of endpoints
Encrypting data between two endpoints is largely futile if it is not possible for either party to determine if the other is authentic. A large part of SSL/TLS concerns the use of certificates to allow servers and clients to identify themselves using a chain of trust.
Many libraries supporting SSL/TLS will establish an encrypted communication channel without fully validating the authenticity of the endpoints. For example, some libraries do not check that the server name in the certificate matches the expected name.
Because of this, it is often possible to intercept SSL communications from embedded devices.
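For comparison, a correctly configured client must enable both certificate-chain validation and hostname checking. The sketch below uses Python's standard `ssl` module purely to illustrate the two checks that the weak embedded stacks skip; it is not intended as embedded code, and no real hostname is assumed.

```python
import socket
import ssl

def open_verified_tls(host, port=443):
    """Open a TLS connection with certificate *and* hostname verification.

    ssl.create_default_context() enables both CERT_REQUIRED (chain
    validation against the system CA store) and check_hostname (the
    certificate must match the name we intended to connect to).
    """
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)

# The two critical settings can be inspected without opening a connection.
# Disabling either one (verify_mode=CERT_NONE or check_hostname=False)
# reintroduces the interception risk described above.
ctx = ssl.create_default_context()
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED
```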
Example: Weak encryption in bootloaders
Bootloaders are often limited in size and must run quickly to provide a good experience. A secure bootloader must load the firmware image, check the signature, and then decrypt the firmware.
It can be challenging to fit the required cryptography into this small space. Porting existing solutions based on general-purpose libraries such as OpenSSL often produces code that is both too large and too slow. Even embedded TLS implementations such as wolfSSL can be too big.
As a result, developers either implement their own cryptography, use insecure cryptography (such as XXTEA), or omit secure bootloader functionality entirely.
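The verification step itself need not be complicated. The sketch below shows the shape of an image check for a hypothetical layout (4-byte length, payload, 32-byte tag); it uses HMAC-SHA256 only to keep the example self-contained. A real secure bootloader should prefer an asymmetric signature (ECDSA or Ed25519), so the device stores only a public key and a leaked device cannot forge images.

```python
import hashlib
import hmac

# Hypothetical image layout: 4-byte big-endian length | payload | 32-byte tag.
# HMAC is used here purely for a self-contained sketch; asymmetric signatures
# are preferable because the verification key need not be kept secret.

def verify_image(image, key):
    """Return the firmware payload if the tag verifies, else None."""
    if len(image) < 36:
        return None
    length = int.from_bytes(image[:4], "big")
    payload = image[4:4 + length]
    tag = image[4 + length:4 + length + 32]
    expected = hmac.new(key, image[:4 + length], hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

# Build and verify a sample image with a stand-in device key.
key = b"\x01" * 32
fw = b"firmware-blob"
img = len(fw).to_bytes(4, "big") + fw
img += hmac.new(key, img, hashlib.sha256).digest()
assert verify_image(img, key) == fw

# A single flipped payload bit must cause rejection.
tampered = bytearray(img)
tampered[5] ^= 1
assert verify_image(bytes(tampered), key) is None
```

Note the use of a constant-time comparison (`hmac.compare_digest`); comparing tags byte-by-byte with `==`-style early exit can leak timing information.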
Silicon vendors should provide cryptography libraries that are targeted towards embedded systems, and these should have ready-made solutions for the majority of common applications.