Tuesday, June 20, 2017

Amazon EBS breaches -- from report to leak

I first read this article on the 5th of June. Yesterday, 19 June, I read this one, after a report from UpGuard.

The original release does not explain in detail how the records were discoverable, but there are no coincidences. The moment I read about it, I thought of that first article about permissions: storing large sets of data in a cloud service while not caring about permissions. Expect more.

A few considerations
* this mostly affects Americans, but had it happened in the EU, and a year from now, the party would be bankrupt [memo to self: does GDPR apply to political parties?]

* the same way a hard disk is just a tool to store information, it sits passively in the security chain. A hard disk has no notion of permissions; it is the OS and the controls around it (such as encryption) that secure it against unauthorised access. Nobody, unless very technical, should have direct access to the hardware. A cloud storage service works the same way. Directly manipulating data this sensitive (including ethnicity and religion, the worst possible kind of leak) should never be done in bulk, as that is halfway to losing track of it. The proven way of handling personal data is to understand how it flows and, at each processing point, who has access to it and where it lies.
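As a minimal sketch of what "caring about permissions" can look like in practice, here is a crude audit over S3-style ACLs. The dict layout mirrors how S3 expresses grants (the two group URIs are the real public-access groups), but treat the shape of the input as an assumption, not an exact API response:

```python
# Sketch: flag world-readable grants in a simplified, S3-style ACL dict.
# The two URIs below are the standard S3 "everyone" groups; the dict
# structure itself is a simplified model, not a verbatim API response.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_grants(acl):
    """Return (grantee, permission) pairs that expose data publicly."""
    risky = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") in PUBLIC_GRANTEES:
            risky.append((grantee["URI"], grant.get("Permission")))
    return risky
```

Run something like this periodically over every bucket and you have a poor man's permissions audit, which is exactly the step that seems to have been missing here.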

Saturday, May 6, 2017

FuturePets.com breach -- PCI compliance

A few weeks ago a new major breach became known. It turns out there was a server with a database of 100,000+ customer records that even included credit card numbers. Apparently, the server was sitting in plain sight with no protection whatsoever (not even a username or password).

This is just unbelievable. While I am not surprised about the personal information, the credit card numbers do puzzle me.

In order to accept payments, the company needs to comply with PCI DSS. It comes in different levels, and the main dividing lines are the number of transactions and whether the credit card numbers themselves are stored:
  • 100k transactions is enough for them, under normal circumstances, to require an external audit and regular pentesting of their external interfaces
  • storing credit cards is MAJOR. It draws the line between a really simple compliance programme and a really complicated one. I am simplifying, but this is the main reason why receipts show asterisks with only a few digits of the card. The rule is: process the payment and forget the credit card details.
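The receipt-style masking is simple enough to sketch in a few lines. This is an illustration of the idea, not a substitute for the actual PCI DSS masking requirements, and it assumes the input is a well-formed card number:

```python
def mask_pan(pan: str) -> str:
    """Mask a card number, keeping only the last four digits,
    as on a printed receipt. Everything before those four digits
    should be discarded once the payment is processed."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]
```

Storing only the masked form, never the full number, is what keeps a company on the simple side of that compliance line.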
How was it possible that a server with this information was just accessible without even a password?

Two explanations come to mind. One is that they lied, in multiple places, in their self-assessment and cheated when audited.

The other is far simpler: this was a forgotten server used for, e.g., backups.

PCI DSS has been doing wonders for security for years now. However, it is very prescriptive, can be cheated more or less easily, and can be applied quite inconsistently, unlike ISO 27001, for example, which focuses on the management of security.

Again, I praise PCI for what they have done, but one should always keep in mind that the programme is a trade-off between simplicity and adoption. If the programme were too complex, the vast majority of (small) companies would not dare accept payments. So it sort of follows an 80/20 rule, similar to what the UK is doing with its 10-step programme: better to have "lots" of security, even if not comprehensive, than to require everything and the moon and have nobody do it.

Friday, December 30, 2016

the UK and GDPR

There were already strong indications that the UK would fully adopt GDPR even after Brexit. A recent document (21 Dec) has now come out that clears any remaining doubts, which is remarkable considering Britain is halfway out the door.

This document is penned by the Department for Culture, Media and Sport, but it still says it all. It starts by stating:
Government will (...) improve cyber risk management (...) through its implementation of (...) GDPR.

Then it closes any discussion with:

For now, Government will not seek to pursue further general cyber security regulation for the wider economy over and above the GDPR.
Which, in a sense, is a remarkable statement, since GDPR is not, strictly speaking, about cyber security. It embeds and enforces cyber security, but from the narrow angle of Personal Information.

Tuesday, October 18, 2016

GDPR and ransomware

Even if Brexit comes to change everything, a lot will pretty much stay the same, and the EU's GDPR (or just google it), which everybody will have to comply with by 2018, is a good example.

I am not a lawyer, so I am surely missing a few things. What I am not missing is the cyber security impact of GDPR. Now that it has been approved, all sorts of implications are coming out.

One of them is about ransomware, and the PCI council makes an interesting remark: if your company gets hit by ransomware and you were not prepared (e.g., with good, protected/offline backups), you will probably be advised to pay up, because at least you have some chance of getting your documents back. The trouble is that the ransom will likely be the least of your costs: GDPR allows fines of up to 4% of revenues upon a breach if your company fails to show it did everything, within reason, that it could have done.

The interesting point is that cyber security done specifically to address GDPR compliance is in effect inefficient. My recommendation would be this: implement a broad cyber security programme on information security and then specifically trim and adjust it, via dedicated gap and risk assessments, to GDPR.

Blockchain.info incident - very good incident response

Amongst other things, Blockchain.info provides online bitcoin wallets, which makes it an attractive target. Last week something happened, fairly outside its reach, that had an exemplary response, even if it meant taking down the whole service for a day. I cannot blame blockchain.info and I applaud them.

News reports say the registrar (whois results here) was breached, a risk blockchain.info can only accept.

Attackers somehow managed to change the DNS records of the domain. The trick, I presume, is to redirect users to a specially crafted website with the same design and trick them into typing in their credentials.
Interestingly, a user on reddit posted an alert just an hour or so after it happened. I asked how he found out, and he told me he spotted it by accident when his application (which pulls data from blockchain.info) started reporting errors. Most likely, the errors and warnings were due to the self-signed certificates the attacker was using. Firefox and Chrome are very zealous about this and request a multi-click authorisation from the user.

Practical highlights:
  • even if your budget is low, monitor your security. There is a lot one can do with off-the-shelf tools, open source, or even good ol' scripting.
  • have an incident response plan, document it and distribute it to everyone. If resources are lacking, just do what blockchain.info did: pull the plug as soon as something happens that looks minimally serious. Then investigate, document, and use it to add more controls, technical or procedural, to your cybersec framework.
  • once more, it shows that HTTPS helps in more than one way. Buy certificates from a well-known CA and use them generously. The main browsers will cooperate, often stopping an attack right there by alerting the user, and are intensely pushing an https-only web.
  • if possible, practical and sensible, insist that your users adopt 2FA. In this case, it would have stopped breaches if combined, for example, with a single-active-session policy on blockchain.info's servers.
  • interestingly, if you run a software-based service on which others rely, take advantage of your users: they all have an incentive. Leverage this crowd-sourcing and turn it into an action in a new guideline. The user on reddit seems to have caught it really fast, probably faster than a SIEM would have, especially if you do not monitor that side of your infrastructure.
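The monitoring point above can be sketched with good ol' scripting. A self-signed certificate, the very thing that tipped off the reddit user, has its issuer equal to its subject; the check below assumes certificates have already been fetched and parsed into simple dicts (how you obtain them is up to your tooling):

```python
def looks_self_signed(cert: dict) -> bool:
    """Crude heuristic: a certificate whose issuer equals its subject
    is self-signed -- the kind of swap a DNS hijack tends to surface."""
    return cert.get("issuer") == cert.get("subject")

def monitor(certs: dict, alert) -> None:
    """Call `alert(host)` for every endpoint presenting a suspicious cert.
    `certs` maps hostnames to parsed certificate dicts (an assumed shape)."""
    for host, cert in certs.items():
        if looks_self_signed(cert):
            alert(host)
```

It is deliberately naive (a hijacker could buy a domain-validated cert), but as a cheap tripwire it would have fired within minutes here.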

Monday, August 29, 2016

Why Saving £500/month Blew It for Ashley Madison

The Ashley Madison incident is still a source of very interesting news. The dating website for organised shady affairs (no judgement whatsoever) went before an Australian court and the hearing is public. This was the first time I saw a court do a risk assessment.

The report shows that Ashley Madison failed in its obligation to protect user data that, needless to explain, was highly personal.

It does not surprise me to learn that they shared passwords on a Google drive, or that there was no multi-factor authentication for remote access to their systems, such as from a public location. Overall, 80% of security is about good technical controls that do not really need a Cyber Security Office. They do need, however, at some point, guidance from a security professional.

The only thing I have to say is that I am sure that in Canada they could find someone who, for some £500/mo, would act as interim CISO and stop them from having really bad practices.

This is basically my offer to many small companies and charities: let me spend 3-6 months in your company, do a proper gap assessment, align a report with a relevant framework, and have the company implement it and get started with Cyber Security. From then on, it is a matter of keeping it going with its own resources and some part-time steering. You even get a one-man SOC that will look out for obvious signs of compromise and issue alerts on major vulnerabilities and mitigation actions. Hence the round figure of £500/month.

80% of security, in my opinion, is low-hanging fruit, especially when the company has so many technical resources that are easy to train and discipline. Good policies and guidelines are halfway there.
It is not enough to get ISO 27001 certified, but it is certainly enough to cover all the gaps the report details, and much more.

Sunday, August 21, 2016

How Secure Software Development fits in a company

This is an interesting article about the importance of Secure Software Development. The title says it all: "To Protect Enterprise Data, Secure the Code".

I could not agree more; however, I am not sure I agree with the solutions advanced. It is suggested, above all, that developers should take on the burden. Moreover, the article sort of suggests that securing the code is central to securing the enterprise.

I have recently been speaking to a highly innovative company in the UK whose main product is software based. In a nutshell, they develop all-software contact centre solutions through which companies can keep in touch with customers over a plethora of channels. As such, secure code is of paramount importance.

The question is, this is not enough and, more than suggesting a 4-fold approach to a cyber security programme, I would never put the burden of secure coding on the developers.

The 4 work packages I have suggested should be obvious:
  • the internal organisation operations
  • the product delivery plan -- for the case of installing their products in the client but having no further responsibility over operations after a signoff
  • secure software development lifecycle (S-SDLC)
  • secure hosted operations or SaaS
Together, these provide 360-degree cyber security. There are a lot of overlaps and synergies, so it does not necessarily mean 4 independent programmes.

As for the role of developers, they should keep doing what they are doing now, but the whole project delivery needs to be adjusted. Quite critical is to engage the developers in Risk Assessments and Table-Top Exercises before and after implementation. Engaging 3rd parties (e.g., pentesting) or having a non-developer (the CTO if needed) do code sanitisation are key steps that do not need to interfere with the normal, and quite personal, development style.

Overall, most of implementing a Secure SDLC can be done around the current practices and free style of each developer. The key steps are inserting extra gates in the release plan (so as to prevent design/implementation flaws), engaging 3rd-party services, designing policies that enable the cyber security office to vet a release with adequate checks, and reserving some resources for software maintenance at the security level.

My message being: do not drop the onus of security on the developers. Let them work freely, add security as just another well-defined deliverable requirement, and build the rest around them so they can focus on the key functionality.

Saturday, April 23, 2016

Shapeshift hack (a Bitcoin service)


Shapeshift.io is a startup revolving around Bitcoin (one of my lateral interests and a movement I follow quite closely). Last week they reported coins having been stolen. More than that, Erik Voorhees writes a fascinating report of how it happened. It is a story I will be using in many talks.

My first reaction, shared in a reddit post, is that they did not actually do anything fundamentally wrong. They are a startup, so getting the business up and running is the goal. This means they have no cybersecurity office and, worst of all, they are all tech people, which unfortunately gives a stronger sense of "we don't need a cybersecurity programme because we have firewalls". I have worked with tech startups with an immensely skilled army of developers and managers that still showed quite an alarming unawareness of many basic concepts of cybersecurity.

As I often say, cybersecurity is 20% about firewalls and 80% about organisational processes. In this case, what failed was the human element:
  • do not leave computers unlocked
  • do extensive background checks on new starters
But who has never left a laptop open and logged in? I keep doing it, even in public places. And how thorough and reassuring can background checks be?

There are, of course, many tactical improvements possible: secured critical operations, segregation and air gaps for critical assets, much clearer separation of duties, much better auditing, much, much better accounting, etc. Beyond making the system harder to exploit, above all they would make it easier, and much faster, to understand what happened.
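On the auditing point: a minimal sketch of a tamper-evident audit log, where each entry commits to the previous one by hashing, so an insider editing history after the fact breaks the chain. This is a generic technique, not what Shapeshift ran or should have run verbatim:

```python
import hashlib

class AuditLog:
    """Append-only log where each entry's digest chains over the
    previous digest, making silent edits to history detectable."""

    GENESIS = "0" * 64  # placeholder digest before the first entry

    def __init__(self):
        self.entries = []  # list of (record, chained_hash) pairs

    def append(self, record: str) -> None:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        digest = hashlib.sha256((prev + record).encode()).hexdigest()
        self.entries.append((record, digest))

    def verify(self) -> bool:
        """Recompute the whole chain; False means someone tampered."""
        prev = self.GENESIS
        for record, digest in self.entries:
            if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```

Ship the digests to a separate, write-only host and even an attacker with a developer's laptop cannot quietly rewrite what happened, which is exactly the "understand what happened, and fast" property that was missing.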

Sharing the story, with care not to reveal too much, was a good thing to do in my opinion. I have a few unanswered questions, but I also feel it was an honest report. It reassured customers: everyone will be hacked at some point, and cybersecurity is mostly about minimising damage to the least (reasonable) extent when it happens, not about preventing it.

The fact that their source code is on the loose is alarming, though. They should subject it to a thorough analysis (by a 3rd party!) and set up a bug bounty programme. They are a business that relies on exposure to the public internet, and I can only imagine how many people are trying to exploit it.

Finally, in the words of Erik Voorhees, there is also this valuable lesson, so well formulated:
Though it sounds cliché, (...), do yourself a favor and bring in 3rd party professional help very early. We hadn’t needed it at first, because we were small. But growth creeps up on you, and before you know it you are securing significant assets with sub-standard methods.

Monday, December 28, 2015

IoT -- when cyber security is there but is wrong (RSI Videofied)

Kudos to RSI Videofied, a French company making surveillance devices, for thinking of cyber security (as if there was an option) and actually executing the plan. The problem is, they would have been better off with no security, or with masquerade security (looking like you have it when you really do not), more of a social engineering step to create disincentives for script kiddies.

So it turns out the implementation is utterly broken. It reminds me of the "firewall" fallacy I hear so often: "but we have firewalls, we're safe".

Again, cybersec has many faces and aspects. It is a machine with multiple moving parts. Each adds a bit on its own, but the real value comes when the machine is fully working and all parts are moving. On top of that, technical controls, such as the choice of encryption, can be complex and hard to get right. That seems to have been the case here.

Friday, September 18, 2015

Wireless comms in Oil & Gas -- why not wireless

The Network Engineer in me is all in favour of wireless. One application this article missed is subsea Oil & Gas, where the rover, at deployment or maintenance time, needs to connect locally to the equipment.

In the ocean, this is no easy trick. There are many reasons, but two stand out: you cannot just plug in an ethernet cable, and, if wireless, you need to use acoustic comms, which have very limited bandwidth at just 1 metre away.

So wireless feels just like magic.

The problem is that the Cyber Security Engineer in me does not feel comfortable with wireless in a plant. Do I need to explain why?

Whereas in the ocean it is easier to close my eyes (fish are not the typical hacker or disgruntled employee), on the topside, such as Master Control Stations or Business Networks, wireless should really be avoided at all costs.

Do I really need to explain why?

Wireless comms in Oil & Gas -- designing Industrial Networks

Interesting article on Oilpro.com, and the kickoff is: "When it comes to wellsite optimization, communications systems are an area often overlooked by producers." It is, and when it is not, they bring in IT people full of IT-style certifications who get very confused when they see that Cisco barely has an industrial-grade router. When they go back to basics and look for other vendors, they get even more confused and can barely create a VLAN.

Do not misunderstand me: I have full respect for IT/ICT people. The problem is that it is simply different. You do not plan, design, deploy, test, etc., an Industrial/Embedded/IoT network the same way you do in IT/Enterprise/Telecom. Different tools, different requirements, different design workflows, etc.

It's just different.

...which also reminds me I have long promised a post comparing Networks (and Cyber Security, for what it is worth) in different domains and applications: Military, Industrial, Internet-of-Things, Telecom, Office/Enterprise, Financial, Datacentre, etc.

Tuesday, September 15, 2015

The Importance of Logging

I just found this document from Google with some practical considerations for Logging. Very IT and Software Development oriented, but very useful to have at arm's reach.

Why is Logging so important? Many reasons, some of which are:
  • you will get compromised -- so be prepared to understand how it happened
  • you will get compromised -- so be prepared to have forensic material
  • you will only realise you got compromised after the fact. Sometimes it takes months. Strange entries in logs are often the first sign that something is going on.
On top of this, logs give very interesting insight and data for further operational intelligence and analytics. They can even be a source of business on their own.
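To make the consolidation point concrete, here is a minimal sketch of structured logging in Python: one JSON object per line, with a UTC timestamp, which is about the cheapest format that still consolidates and time-syncs cleanly across sources. The field names are my own choice, not a standard:

```python
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line: trivially parseable, easy to
    consolidate across sources, timestamps in UTC for time-sync."""

    def format(self, record):
        return json.dumps({
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

def make_logger(name="app"):
    """Return a logger that writes JSON lines to stderr."""
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

From here, a SIEM, a grep, or a pandas script can all consume the same stream, which is exactly the "integration" point below.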

Finally, Logging and Monitoring go hand in hand. Whereas Monitoring a Network or a Plant is more of an immediate type (alarms, green/red lights, instant indicators), trending and long-term time-series analysis is effectively a child of Logging. Integration is the key word.

The detail to which Logging should be set, how to consolidate different sources and formats, how to time-sync sources for consistency, how to tie Logging to Auditing and upper-level Reporting, etc., can be left for later, if you must, but the basis must be there.

Save budget, resources and effort to think about Logging right from the beginning.

Saturday, September 5, 2015

How to securely distribute security patches (not?)

Chrysler was at the centre of the news some time ago, after a ridiculously easy hack was demonstrated that allowed an attacker to completely take over a vehicle, including crashing it at the push of a button.

Props to Chrysler for reacting fast(-ish) (as if they had an option, given the Safety rating of the vulnerability), but a company like that should know better than to send a USB stick through the post. It is not the best way of doing it, and it sets a precedent: anything that arrives in the post in the future and looks legit will be installed by enough car owners.

Then again, what are the options? Not many that serve a heterogeneous group of people (1.4m vehicles) well. Recalling all vehicles was probably less efficient and too costly.

I can see, however, them sending blank USB sticks with instructions to download the patch onto them from a website (with login credentials sent by post), the USB stick having further means to verify the patch, along with a bold note advising never to insert a USB disk under any other circumstances. People unable to follow the procedure would be redirected to a Chrysler garage.
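The "means to verify the patch" part is simple to sketch: compare the downloaded file against a checksum published out-of-band (on the credential-protected website, over HTTPS). This only proves integrity, not origin; a vendor signature would be needed for that, but as a sketch:

```python
import hashlib

def patch_is_authentic(patch_bytes: bytes, published_sha256: str) -> bool:
    """Check a downloaded patch against a SHA-256 checksum published
    out-of-band. Detects corruption or tampering of the file itself;
    it does NOT prove who produced the patch (that needs a signature)."""
    return hashlib.sha256(patch_bytes).hexdigest() == published_sha256
```

An owner (or the tool on the USB stick) runs this before flashing; a garage would do the same. It is the minimum bar for any update delivered over an untrusted channel like the post.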

What am I missing? You are welcome to send comments to blogcomments-roundything-vitorjesus.com

Sunday, August 30, 2015

OT is not IT -- but convergence of course

....convergence of vendors, technologies, design processes, V&V, etc

This post at LightReading seems to suggest that, similarly to a decade ago, OT is converging with IT; at the least, classical Telecom vendors are now seeing the opportunity.

I cannot stress enough that ICS/OT is not IT, the same way Telecom is not IT or datacentres are not Telecom. They all have different requirements, different applications of (the same) technologies, different vendors, different priorities (e.g., "CIA" versus "AIC"), and different communications and cybersecurity architectures.

I am working on a table that highlights the differences from a Networking perspective.

Saturday, August 1, 2015

Can Industrial Cyber Security be outsourced?

Yes -- most of it, at least, once a custom programme has been concluded.

A programme includes adequate awareness training of all employees, signed-off cybersecurity policies and quality processes, device certification/hardening, reference architectures/guidelines/checklists, etc.

The third party that takes on the cybersecurity tasks can make everything very transparent, the same way EHS becomes fairly invisible once all signs and procedures are in place. The work happens in the background.

What is needed is a sharply identified person, or people, who, after receiving a bit more than a simple awareness course, will stay engaged and interface internal/external actions upon new projects and/or changes.