Iterating on an ID in a short URL sent via SMS made it possible to collect personal data and shipment information.
Who: | Bring |
Severity level: | Low to medium |
Reported: | August 2020 |
Reception and handling: | Fair |
Status: | Fixed |
Reward: | Nothing at first, a thank you in the end |
Issue: | Leak of personal information - perfect for phishing |
Bring is a part of the Norwegian postal service aimed at the business market.
I ordered a freezer from the consumer electronics retailer Elkjøp. Before delivery I got an SMS from Bring containing a link to the shipment information. The web address in the SMS immediately caught my attention. The address was so short that I knew that I would be able to iterate through shipments. And to my surprise the destination web page displayed personal data.
The SMS I got contained a link to the shipment information.
At first look the page didn't seem to contain anything special for my package. There was a zoomed out map and information about the delivery time.
Then I saw that there was a page with my name, address and phone number.
At last there was a page with the latest events on the package's trip.
The link with the shipment information was https://glow.bring.com/track/969986d2-3403-42ff-ae12-1c06868bb0f4. That address contains a long unique unguessable identifier. An identifier like that is normally considered an OK way of concealing that kind of personal data - at least if it's for a limited time.
The problem was the link in the text message. The link was https://s.bring.se/XRP3. This was the key to the castle. 🗝️ 🏰
Looking at the part XRP3 from the link I would guess that all addresses would be in the range [A-Z0-9]. Codes like XRP2, XRP1, XRP0, XRO9 etc. seemed to give valid redirects to other URLs containing personal data and shipment information.
I only tested a few to get an idea if this was a problem or not. Not all codes I tested gave a result. I believe almost all of them gave a redirect to a UUID, but some of them gave a "page not found" message. I don't know how many persons were affected by this, or what time range we are talking about, but I did see past, present, and future deliveries.
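To get a feel for how small the keyspace actually is, here's a back-of-the-envelope sketch. Note that the four-character length and the [A-Z0-9] alphabet are my guesses based on the single code I observed - Bring's real scheme may differ.

```python
# Rough sketch of why a 4-character [A-Z0-9] short code is trivially enumerable.
# The code length and alphabet are assumptions based on the observed code "XRP3".
from itertools import product
import string

ALPHABET = string.ascii_uppercase + string.digits  # 36 characters

def candidate_codes(length=4):
    """Yield every possible short code of the given length, in order."""
    for combo in product(ALPHABET, repeat=length):
        yield "".join(combo)

# The whole keyspace is only 36^4 codes - tiny by any security standard.
keyspace = len(ALPHABET) ** 4
print(keyspace)  # 1679616
```

With fewer than 1.7 million possible codes, even a slow, polite scan would cover the whole space in a matter of days.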
Using the short link sent in customer SMSes it was possible to iterate through and collect the following information:
It was possible to enter a gate code that could be used when doing the delivery. Hopefully no codes were available. I didn't see any (in my very small dataset). Unfortunately I didn't test to enter any value on my own delivery. What I did see in the data returned by the backend API were keys/values like these: "lockCodeOrdered", "eventType": "lock-code-entered", "gateCode"
It's important to emphasize that this wasn't an issue harming all or most of Bring's customers or recipients. My understanding is that this issue applied to a subset of recipients of shipments sent using the platform called Glow - used within the service Bring Express. I have no idea of the volume here. It looks like it affected people in Norway 🇳🇴, Denmark 🇩🇰, Sweden 🇸🇪 and Finland 🇫🇮.
Wow, either I can't spell and/or read or that was a bad tip I got from @bringnorge. pic.twitter.com/DBU9gIhdbe
— Roy Solberg (@roysolberg) August 12, 2020
As seen so many times, it was really hard to find somewhere to report a security issue. Bring did not have a security.txt, there was no response on Twitter, and their chat bot certainly was of no help. When I got in touch with a real human being I got a non-existing email address. The customer service had mixed up .no and .com.
Two days later I got a response from a customer advisor asking if this was a real shipment meant for me. I confirmed that one of the shipments was indeed for me.
I got a reply telling me that I was supposed to get all this personal information. So everything was fine. I replied, begging the customer advisor to forward the email to someone responsible for IT security or to the IT department itself. They would understand.
I never heard back from the customer advisor. I had no clue what happened.
Luckily - when I was a bit upset with their customer advisor on the third day - they gave me a questionnaire asking how much I enjoyed their service. I replied that it was impossible to report a security issue. Someone actually read the questionnaire and acted on the information I gave. I got an email asking for information about the issue I had found.
I forwarded the email thread I had with their customer service. I got a reply that they had looked into it, that the customer advisor had actually forwarded my email on the third day, and that they had fixed the issue. I was also told that they would send a notification to the IT security department and the Data Protection Officer (DPO). They would also use the case internally for educational and improvement purposes.
There were two ways in particular this information leak could be misused:
Very often I see that Google has indexed pages that contain personal information. I want to do a post on Google dorking, but every time I do some test searches I find security vulnerabilities and information leaks, and that makes me postpone it. Luckily I couldn't see any personal information indexed by Google in this case, but my guess is that that is more of a coincidence, because doing a search like https://www.google.com/search?q=site:bring.com+inurl:track&filter=0 shows that Google has been indexing pages that GoogleBot has come across in one way or another. As a developer you need to remember to Google your sites, exclude the right paths in robots.txt, and add the HTTP response header X-Robots-Tag: noindex (or the meta tag <meta name="robots" content="noindex">).
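As a sketch of that advice, here's roughly what it could look like server-side. This is a minimal stdlib example of my own, not Bring's actual setup; the handler name and markup are made up.

```python
# Minimal sketch (assumed setup, not any real site's code) of serving
# tracking pages with both the X-Robots-Tag header and the noindex meta tag.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Header variant - also covers non-HTML responses like PDFs
        self.send_header("X-Robots-Tag", "noindex")
        self.end_headers()
        # Meta tag variant in the markup itself
        self.wfile.write(b'<html><head><meta name="robots" content="noindex"></head>'
                         b'<body>Tracking details here</body></html>')

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port=8000):
    """Run the demo server (blocking)."""
    HTTPServer(("127.0.0.1", port), TrackingHandler).serve_forever()
```

Belt and suspenders: the header works for crawlers that never parse the HTML, the meta tag for pages served through caches that strip custom headers.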
Companies: Please train customer service to handle someone reporting a security issue. And please offer some information on where and how to report an issue.
All of us: Be aware when you get that phone, SMS or email regarding some shipment supposedly on the way. Are links going to legitimate domains? Are they asking for personal information? Are they asking for money? Is the grammar and spelling correct?
The Norwegian Police Security Service (with the Norwegian abbreviation PST which I'll use for this write-up) is the police security agency of Norway. They have by now made it a tradition to have Capture the Flag (CTF) style job ads and competitions. This is the fifth time I'm covering them here: 1, 2, 3, 4.
PST added an N in front of their name and created the imaginary Northern Polar Security Service (Nordpolar sikkerhetstjeneste = NPST). NPST's role is supposedly to protect Santa Claus, his elves and Christmas itself. Like last year, PST posted a fake job ad saying they were looking for elf officers (="alvebetjent") for a temporary position to help out NPST. A few news outlets wrote about it as well (TV 2, Avisa Oslo). Everything went down on npst.no from December 1st to December 24th.
This time PST reused an acronym from an internship job ad challenge they had earlier this year: DASS - Digitale Arkiv- og SaksbehandlingsSystem. Dass "happens" to be a slang word for toilet. 🚽
New this year were Easter eggs(!) that worked as extra hidden flags that had to be found to get full score.
Let's jump straight to it. Click on a challenge to expand it.
I'm glad the scoreboard was kept open until January 5th. By that time 35 users had managed to get all 240 points, and 29 of those had all 11 eggs. There were 1481 users that successfully submitted at least 1 correct flag, though I suppose there's quite a few non-real users among them.
It's been yet another great CTF by PST. It's really cool that they do these. I'm sure it's helpful for recruitment. And personally I'm glad more people learn more about anything IT security related.
This year CTFd was replaced by a beautiful retro Windows 95-like user interface. Except for the missing solve time per challenge, I loved this year's user interface.
I'm so happy the challenges were published at 7 a.m. instead of at midnight like last year. I didn't have many minutes to spare at 7 in the morning, but at least I didn't stay up late with my brain working overtime when it was supposed to sleep.
What I didn't enjoy was the number of challenges: 24 main challenges + 11 eggs. If you spend on average 1 hour on each, you have spent a full work week over those 24 days. That is just way too much if you have a job, school, exams or a family. The CTF experts of course solve every single challenge in less than 1 hour, but most of us aren't there. I would like to see the workload cut in half.
There were no days without a new challenge. I couldn't work on the calendar every day, which meant that by the end of the calendar I was several days behind. I was constantly chasing to catch up, but I never made it. That was stressful. There should probably be 2 days without anything new every week. If the workload is this big next year I don't think I will prioritize taking part in it.
The concept with the Easter eggs is OK, but I didn't like that there were eggs you had no idea where to find. It meant that the moment you were done with the challenges you had to go egg hunting - without any idea where to look. Especially egg #1 with humans.txt was torture, and probably also egg #3 (cupcake.png) for some people. It added what felt like more stress than fun.
For me it also felt a bit strange that it was the last solve time - regardless of whether it was a regular challenge or an egg - that decided your placement on the scoreboard. I would have thought the main challenges were more important than the eggs. Solving eggs wasn't something extra; it was absolutely necessary to be near the top.
I loved the assembly language SLEDE8 and its great tooling. However, while the e-learning was necessary to get us ready for bigger tasks, there were too many algorithms to be implemented. If I want to do algorithms I will find another type of competition. And again, the eggs tied to all the algorithm tasks didn't leave a good feeling. You had finally created an algorithm that worked and got the flag, only to get a message back telling you it wasn't good enough to get the egg.
I hope PST will keep the storyline with NPST and SPST. It's a nice touch that makes the CTF unique and playful.
Oh, and for those really competing for first place I think it's important to never change the time of day challenges are released. Yes, I'm thinking about the final day's challenge that was released the night before. It was a good thing to do, but it should have been announced.
Anyways, except for the workload it's all just minor stuff, because I really love PST's CTFs. I'm looking forward to the next one!
If you have thoughts about my solutions, the CTF, or if I have missed something cool, don't hesitate to comment here or contact me in some other way.
The Norwegian Police Security Service (with the Norwegian abbreviation PST which I'll use for this write-up) is the police security agency of Norway. Following up on their big Christmas CTF advent calendar they now did a smaller CTF for Easter.
This time PST added an H to their name and created the imaginary Easter Bunny's Security Service (Påskeharens sikkerhetstjeneste = PHST). PHST's role is supposedly to protect the Easter Bunny's egg basket and core values. Everything happened on phst.no from April 9th to April 13th 2020.
Let's jump straight in. Click on a challenge to expand it.
By the end of Easter Monday 19 people had a full score and a total of 24 had solved all the challenges. I was pretty happy to be the fourth person to solve the last challenge, which placed me fourth overall.
I really enjoyed these Easter puzzles. They were entertaining, varied and they didn't need that big of a time investment. Maybe the best part was that the challenges were opened daily at 12 p.m. instead of 12 a.m., so there was no need to lose any sleep or check the mobile in the middle of the night.
The Norwegian Police Security Service (with the Norwegian abbreviation PST which I'll use for this write-up) is the police security agency of Norway. Once in a while they have job ads with some more or less hidden challenges - almost Capture the Flag (CTF) style. I've done posts on them back in January and in October 2019. This time they went all in with a CTF advent calendar.
PST added an N in front of their name and created the imaginary Northern Polar Security Service (Nordpolar sikkerhetstjeneste = NPST). NPST's role is supposedly to protect Santa Claus, his elves and Christmas itself. PST posted a fake job ad saying they were looking for elf officers (="alvebetjent") for a temporary position to help out NPST. Of course big and small media sites took notice and published stories on this (NRK, TV 2, VG, Politiforum). Everything went down on npst.no from December 1st to December 24th.
Let's jump straight in. Click on a challenge to expand it.
While I worked on all the challenges alone, there were a few times I was stuck long enough to discuss or ask for hints from Frank Karlstrøm (Twitter, blog). I'm pretty sure I wouldn't have nailed all the challenges otherwise.
The CTF closed at the end of New Year's Eve. Looking at the scoreboard there were 19 users who managed to get all 274 points. 39 users managed to solve challenge 24 (18 by the end of Christmas Eve). The "problem" that stopped many from a full score was challenge 23, which needed 3 hints from PST before it was solved. There were 1048 users who successfully submitted at least 1 correct flag, though I suppose there are quite a few non-real users among them. Still, several hundred people have tried out the challenges. I think that's pretty good.
This was actually my very first CTF. I don't think the intention was to have a very beginner friendly CTF, but at the same time the challenges generally weren't very hard. I think it's really cool that PST did this. This and previous job ads are a great way of showing off some of the expertise they are looking for. They sure get a lot of both media attention and awareness in the IT industry. I'm pretty sure this can - at least in the long run - help them hire the right people.
I wish the challenges weren't published at midnight. I mean, it's fine if it was PST's way of mapping out who's got no commitments, no family, no job, no school. Otherwise it was a bit harsh with 19 out of 24 days like this. I'm happy that I was able to stay away most midnights, but it was hard not to check out the new challenge on the mobile when waking up in the middle of the night. Personally I'd like the challenges to be published at around 6 p.m. I'm not sure if I'd give away that many hours of next December.
I saw some minor criticism of hints suddenly being published without the release time being announced up front. I think that is a valid point for those really competing to stay and end at the top of the scoreboard.
It feels good that it's over now. You know your brain is working overtime when you immediately start looking for clues when you see toy penguins at the store, or you have dreams where you are trying to figure out the charset of some binary data...
All in all, I think it was a great and very fun CTF. I think it was entertaining with the theme and storyline. I'm impressed by PST and I hope they continue with this and similar things in the future. It's good for them and it's good for all of us if we can increase the expertise in our industry.
If you have thoughts about my solutions, the CTF, or if I have missed something cool, don't hesitate to comment here or contact me in some other way.
The Norwegian Police Security Service (with the Norwegian abbreviation PST which I'll use for this write-up) is the police security agency of Norway. Once in a while they have job ads with some more or less hidden challenges - almost Capture the Flag style.
In December 2018 they posted a job listing where they included a riddle that they wanted people to solve. I didn't hear of the job posting until it suddenly appeared on the front page of most Norwegian newspapers in January. I published my version of the solution just after the application deadline. The solution was also translated to Norwegian and published at kode24.no - a site for Norwegian developers.
PST themselves approved the walk-through:
The solution to PSTs shark riddle advert perfectly explained by @roysolberg https://t.co/Fsejj3NWgi @LeoDiCaprio
— PST (@PSTnorge) January 15, 2019 #hungryshark pic.twitter.com/1qNImmhQms
Now, I didn't expect to do another one of these walk-throughs this soon, but all of a sudden the interesting Twitter handle twitt3rhai (=Twitter shark) tweeted what might appear as random characters. Twitt3rhai was part of the challenge in December. Of course I had to try to figure out what the tweet was all about..

Things I did not expect: Seeing my own JavaScript code live on the TV channel @NRKno a Saturday night. Achievement unlocked. CC: @kode24no @PSTnorge pic.twitter.com/FIN0afWgYW

— Roy Solberg (@roysolberg) February 2, 2019
Luckily this turned out to be less of a challenge than the previous one.
It all starts with PST's job ad where they want both digital forensics specialists and a system developer. The text in the job advertisement doesn't hint at any challenges as far as I could see, but there is an interesting header image:
It's got all the nerdy details you'd want: A pcap (packet capture) hat, lotsa computer screens, some file dump or something, some code, and more.
While the image doesn't have the world's highest resolution you can see that the person in the image (possibly nicknamed HackerMan) has got the following Python 3 code file named encrypt.py open:
```python
#!/usr/bin/python3
from base64 import b64encode
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

def get_primes(count):
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
    return(primes[0:count])

with open('plain.txt', 'rb') as f:
    plaintext = f.read()

iv = ""
for i in get_primes(16):
    iv += chr(plaintext[i + 16])

key = b'\xba\xda\x55 HackerMan \x13\x37'  # <----- DESTROY AFTER USE

cipher = AES.new(key, AES.MODE_CBC, IV=iv)
ciphertext_b64 = b64encode(cipher.encrypt(pad(plaintext, AES.block_size))).decode('utf-8')
print(ciphertext_b64)

#TODO: automate posting of ciphertext to twitter.com/twitt3rhai
```
Now, this is some interesting code. Let's follow the flow of it:
1. It reads the file plain.txt (aka the unencrypted information) into a variable plaintext.
2. It generates the initialization vector iv by using 16 characters from plaintext, each offset by 16 + a prime number.
3. It defines a hard-coded key (nerd bonus for using the hex numbers BA, DA, 55, 13, 37) which is indicated should be destroyed after use.
4. It encrypts the plaintext with AES in CBC mode, using the key variable as key and the iv variable as initialization vector.
5. It Base64 encodes and prints the ciphertext, with a TODO about automating posting of it to twitt3rhai. (And what do you know, the same Twitter handle is also in that header image.)

Our ultimate goal seems to be to get the plaintext to see what it says. So let's start by heading over to Twitter:
/lb0WZDpaIDJVJwy+Q04LCqERqVj7AUItWGREJuXJeWtZN77yP6grehn1gRif31hjTEjLNFyxESweea81/QluWUyhZV9vmabm8NYkkSc6JJWuylGJKQJzA/wC2cM2ScrQQ8gV7GcnVyBCh7eq/N0jUm/L4xrX6IUIDi5CAkVZ9xSS5Tb4o01onOTbGWLd1EZwzZOMlq88wsTPZ6zY7dqj+LKq3Pj6SKlZfaR9eo6PXrRUOARCe9sQVtWVKc5DJfI

— twitt3rhai (@twitt3rhai) September 20, 2019

This does indeed seem like it could be some Base64 encoded text and can be assumed to be the ciphertext. Now we've got quite a few pieces of the puzzle.
So, how can we decrypt the ciphertext in the tweet? Let's just tweak encrypt.py to make our own decrypt.py:
```python
#!/usr/bin/python3
from base64 import b64encode, b64decode
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

def get_primes(count):
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
    return(primes[0:count])

# TODO: automate reading of ciphertext from twitter.com/twitt3rhai ;)
ciphertext_b64 = '/lb0WZDpaIDJVJwy+Q04LCqERqVj7AUItWGREJuXJeWtZN77yP6grehn1gRif31hjTEjLNFyxESweea81/QluWUyhZV9vmabm8NYkkSc6JJWuylGJKQJzA/wC2cM2ScrQQ8gV7GcnVyBCh7eq/N0jUm/L4xrX6IUIDi5CAkVZ9xSS5Tb4o01onOTbGWLd1EZwzZOMlq88wsTPZ6zY7dqj+LKq3Pj6SKlZfaR9eo6PXrRUOARCe9sQVtWVKc5DJfI'
ciphertext = b64decode(ciphertext_b64.encode('utf-8'))

iv = " " * 16  # Must be 16 bytes
key = b'\xba\xda\x55 HackerMan \x13\x37'

cipher = AES.new(key, AES.MODE_CBC, IV=bytes(iv, 'utf-8'))
plaintext = unpad(cipher.decrypt(ciphertext), AES.block_size)
print(plaintext)
# Output: b'\x15\x01Tx)5,d%,*<&:u%erer! Du klarte det! Beklager, men denne gangen har vi ikke laget flere oppgaver. H\xc3\xa5per du vil s\xc3\xb8ke jobben. Hvis du blir ansatt kan vi love deg mange utfordrende oppgaver.'
```
Here's what the script does: it Base64 decodes the ciphertext from the tweet, sets a dummy initialization vector of 16 spaces (we don't know the real one yet), and decrypts using the known hard-coded key.
The output is the following:
b'\x15\x01Tx)5,d%,*<&:u%erer! Du klarte det! Beklager, men denne gangen har vi ikke laget flere oppgaver. H\xc3\xa5per du vil s\xc3\xb8ke jobben. Hvis du blir ansatt kan vi love deg mange utfordrende oppgaver.'
When I did the challenge I was at first happy with getting most of the plaintext. And I was thinking that the first part missing probably was the start of the word "Gratulerer" (=congratulations).
But what is an "initialization vector" and why is it used here? AES (Advanced Encryption Standard) is a block cipher, meaning that the algorithm operates on fixed-length groups of bits (blocks). To avoid equal plaintext blocks becoming equal ciphertext blocks, many of the "modes of operation" of encryption algorithms feed some part of the previous block into the input of the following one. The mode used here is Cipher Block Chaining (CBC). In this mode each block of plaintext is XORed with the previous ciphertext block before being encrypted. To produce distinct ciphertexts even if the same plaintext is encrypted multiple times - and to protect the first block - there must be some unique input to the first block; an initialization vector.
Looking at the output there are actually 16 bytes of "garbage" (AES' block size). That means that the first word can't just be "Gratulerer".
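To see why only the first block is garbled, here's a toy demonstration of the CBC structure. It uses a simple XOR stand-in for the AES block operation (so it is not real encryption!), but the chaining logic is the same: each block is XORed with the previous *ciphertext* block, which we already have.

```python
# Toy CBC demo: a wrong IV only garbles the first plaintext block.
# toy_encrypt_block/toy_decrypt_block are XOR stand-ins for AES - purely
# illustrative, not secure.
BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(block, key):
    return xor(block, key)

def toy_decrypt_block(block, key):
    return xor(block, key)

def cbc_encrypt(plaintext, key, iv):
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        c = toy_encrypt_block(xor(plaintext[i:i + BLOCK], prev), key)
        out += c
        prev = c  # next block is chained to this ciphertext block
    return out

def cbc_decrypt(ciphertext, key, iv):
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        out += xor(toy_decrypt_block(c, key), prev)
        prev = c  # chaining uses the ciphertext, not the decrypted IV
    return out

key = b"0123456789abcdef"
iv = b"sixteen byte iv!"
pt = b"Gratulerer! Du har klart det, bra"
pt += b" " * (-len(pt) % BLOCK)  # pad to a multiple of the block size

ct = cbc_encrypt(pt, key, iv)
wrong = cbc_decrypt(ct, key, b" " * BLOCK)  # decrypt with a dummy IV
print(wrong[:BLOCK])   # garbage - the wrong IV is XORed into block 1
print(wrong[BLOCK:])   # identical to the plaintext from block 2 onwards
```

This is exactly the situation with the tweet: everything after byte 16 decrypts correctly even with a dummy IV of spaces.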
But how can we get hold of that initialization vector? The answer lies in the code. It is generated from the plaintext - and luckily for us it only uses plaintext beyond character number 16, meaning that we have everything we need. So let's tweak the script:
```python
#!/usr/bin/python3
from base64 import b64encode, b64decode
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

def get_primes(count):
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
    return(primes[0:count])

# TODO: automate reading of ciphertext from twitter.com/twitt3rhai ;)
ciphertext_b64 = '/lb0WZDpaIDJVJwy+Q04LCqERqVj7AUItWGREJuXJeWtZN77yP6grehn1gRif31hjTEjLNFyxESweea81/QluWUyhZV9vmabm8NYkkSc6JJWuylGJKQJzA/wC2cM2ScrQQ8gV7GcnVyBCh7eq/N0jUm/L4xrX6IUIDi5CAkVZ9xSS5Tb4o01onOTbGWLd1EZwzZOMlq88wsTPZ6zY7dqj+LKq3Pj6SKlZfaR9eo6PXrRUOARCe9sQVtWVKc5DJfI'
ciphertext = b64decode(ciphertext_b64.encode('utf-8'))

iv = " " * 16  # Must be 16 bytes
key = b'\xba\xda\x55 HackerMan \x13\x37'

# First round: decrypt with a dummy IV to recover everything but the first block
cipher = AES.new(key, AES.MODE_CBC, IV=bytes(iv, 'utf-8'))
plaintext = unpad(cipher.decrypt(ciphertext), AES.block_size)

# Second round: reconstruct the real IV from the recovered plaintext
iv = ""
for i in get_primes(16):
    iv += chr(plaintext[i + 16])

cipher = AES.new(key, AES.MODE_CBC, IV=bytes(iv, 'utf-8'))
plaintext = unpad(cipher.decrypt(ciphertext), AES.block_size).decode('utf-8')

print(iv)
# Output: er uate!k,mngn i
print(plaintext)
# Output: PST-haien gratulerer! Du klarte det! Beklager, men denne gangen har vi ikke laget flere oppgaver. Håper du vil søke jobben. Hvis du blir ansatt kan vi love deg mange utfordrende oppgaver.
```
The script now does the decryption in two rounds; first without knowing the init vector, and then again with it correctly initialized.
Success!
The solution to the challenge is this:
PST-haien gratulerer! Du klarte det! Beklager, men denne gangen har vi ikke laget flere oppgaver. Håper du vil søke jobben. Hvis du blir ansatt kan vi love deg mange utfordrende oppgaver.
The PST shark congratulates you! You made it! Sorry, but this time we haven't created any more challenges. Hope you will apply for the job. If you get hired we can promise you many challenging assignments.
Many - maybe most - of the smart meter integrations I've seen have been focusing on showing a lot of data in some kind of dashboard. While that's powerful and often useful I want to have all the data transformed into meaningful information.
I think Google Assistant on Google Home and phones can be a great platform for that. You can get the important information extracted and query the data with human language.
It's not very straightforward to make "actions" (apps) for the Google Assistant, but the documentation is pretty good so it isn't hard to at least get started. It's impossible for me to describe the whole process here, but I'll give an overview.
I have made a few actions so far; a simple way of getting the latest Premier League team news from FotMob, the game hangman, and a voice-controlled bank (to be published in September). This experience means that it's relatively quick for me to create an action for this purpose.
If you want to create something yourself and you haven't done anything with the Google Assistant platform before, you really should get an overview of the platform - in this case especially the custom conversational actions. Knowing a bit of conversation design will also help you make better user experiences.
The first step is to create a new project at the Actions on Google console. The project type you want is conversational. Then you only have to add some basic information about the action, like the name you want to use.
Dialogflow is a Google-owned platform for natural-language understanding (NLU), and it provides great tools for creating the conversational user interface for the action.
Most of the Dialogflow part is about intents. Intents can be seen as commands that the user can give to the action. "How much power am I consuming now?" and "What is the electricity price?" would be routed to two different intents.
You'll be able to train the Dialogflow agent by giving it some example phrases it should recognize. With the combination of NLU and machine learning it'll magically understand the intention of the user.
The intents I started out with were a welcome intent, the current usage, usage for a given time, a consumption estimate, and the electricity price.
It wasn't a coincidence that in the previous post I showed how to query the data using Cloud Functions. That function can now be tweaked to handle the Dialogflow intents and return a response to the user.
Here's how I route the different intents Dialogflow has figured out the user meant:
```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const { dialogflow } = require('actions-on-google');

admin.initializeApp(functions.config().firebase);

const app = dialogflow({ debug: false });

app.intent('Default Welcome Intent', (conv) => require('./intent_welcome')(conv));
app.intent('Current usage', (conv) => require('./intent_current_usage')(conv));
app.intent('Usage', (conv, { time }) => require('./intent_usage')(conv, time));
app.intent('Estimate', (conv) => require('./intent_estimate')(conv));
app.intent('Price', (conv, { time }) => require('./intent_price')(conv, time));

exports.dialogflowFulfillment = functions.https.onRequest(app);
```
Also notice how Dialogflow has parsed the time parameter. The time can be unspecified, a specific timestamp or a time period. For example, for the Usage intent I take the parameter and query the database for the usage in the specified time range.
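As an illustration, resolving that time parameter into a query range could look something like this sketch. The payload shapes here are simplified assumptions of my own, not Dialogflow's exact wire format.

```python
# Sketch of turning a Dialogflow-style "time" parameter into a (start, end)
# query range. The dict/string shapes are simplified assumptions.
from datetime import datetime, timedelta

def to_range(time_param):
    """Return (start, end) datetimes for a missing, point-in-time or period parameter."""
    if time_param is None:
        # Unspecified -> default to "today"
        start = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
        return start, start + timedelta(days=1)
    if isinstance(time_param, dict):
        # Period -> {"startDateTime": ..., "endDateTime": ...}
        return (datetime.fromisoformat(time_param["startDateTime"]),
                datetime.fromisoformat(time_param["endDateTime"]))
    # Single timestamp -> treat as that hour
    point = datetime.fromisoformat(time_param)
    return point, point + timedelta(hours=1)

start, end = to_range({"startDateTime": "2019-08-01T00:00:00",
                       "endDateTime": "2019-08-02T00:00:00"})
```

The resulting range maps directly onto the `where('dataTimeLocal', '>=', ...)` style queries shown in the previous post.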
I utilize different response types depending on the intent. I use a little bit of SSML (Speech Synthesis Markup Language) and if the user has a screen available I often show graphs for details. I landed on using the imagecharts service for creating the charts I wanted to display.
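For example, a spoken response can be decorated with a bit of SSML like this. The helper and phrasing are my own, just to show the idea:

```python
# Tiny illustrative helper wrapping a usage summary in minimal SSML.
# <speak> and <break> are standard SSML elements; the wording is made up.
def usage_ssml(kwh, nok):
    """Build an SSML response with a short pause between the two facts."""
    return (
        "<speak>"
        f"You have used {kwh} kilowatt hours today, "
        "<break time='300ms'/>"
        f"costing about {nok:.2f} kroner."
        "</speak>"
    )
```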
And that's really it. That's how I've made my Google Assistant app.
Actions can push notifications to users. I would like to be notified by Google Assistant whenever the power consumption is a bit too high or if the electricity becomes expensive.
Living in a smart home I have the current temperature for both outside and every room in my house. I also have a few lux sensors which will tell if the sun is shining or not. It would be interesting to add some of that data when uploading the power usage and price information.
Earlier when I lived in a house built in 1981 I had a mechanical power meter in the kitchen that was connected to the power cabinet. It showed the current power consumption. With just a glance you always knew if you were consuming too much energy. There was no need for the Internet, Ebay parts, Raspberry Pi, Python, Firebase, Google Home, etc. It sure was simpler times...
In Norway we have to pay for both the electricity usage itself and a network tariff. The latter has two components: a fixed price and a cost per kWh. The price differs from power company to power company.
At the time of writing the price for me was as follows:
Tariff (🇳🇴: Nettleie) | |
- Fixed component (Fastledd) | NOK 2,615 per year |
- Energy component (Energiledd) | NOK 0.4202 per kWh |
Electricity price | NOK 0.4663 per kWh |
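Putting those numbers together: the marginal cost of a kWh is the energy component plus the spot price, with the fixed component on top regardless of usage. A small sketch using the figures above:

```python
# What one kWh actually costs, using the figures from the table above.
FIXED_YEARLY_NOK = 2615      # fixed tariff component, NOK/year
ENERGY_TARIFF_NOK = 0.4202   # tariff energy component, NOK/kWh
SPOT_PRICE_NOK = 0.4663      # electricity price, NOK/kWh

def cost_per_kwh():
    """Marginal cost of one kWh (the fixed component is paid regardless)."""
    return ENERGY_TARIFF_NOK + SPOT_PRICE_NOK

def monthly_cost(kwh_used):
    """Approximate monthly bill: a twelfth of the fixed fee plus usage."""
    return FIXED_YEARLY_NOK / 12 + kwh_used * cost_per_kwh()
```

So every kWh saved is worth roughly NOK 0.89 - almost half of which is the network tariff, not the electricity itself.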
I didn't find a publicly available free API for the electricity spot price, so I used some power company website's Ajax call to get the updated price.
Using requests, the Python script I have that uploads the usage also checks the current price with a call to this REST endpoint every hour.
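The parsing side of that hourly check might look something like this. The JSON shape and field names here are invented for illustration - the real Ajax endpoint has its own undocumented format.

```python
# Sketch of pulling the spot price out of the endpoint's JSON response.
# The payload structure and field names are assumptions, not the real format.
import json

def parse_price(payload: str) -> float:
    """Extract the current NOK/kWh spot price from a JSON payload."""
    data = json.loads(payload)
    return float(data["currentPrice"]["nokPerKwh"])

# Example payload of the assumed shape
sample = '{"currentPrice": {"nokPerKwh": 0.4663, "validFrom": "2019-08-01T14:00:00"}}'
price = parse_price(sample)
```

The fetched price is then attached to each meter reading before upload, so every stored reading knows what the electricity cost at that moment.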
I haven't looked that hard, but I haven't seen any APIs for the tariff at all. But since it's set once a year, it doesn't take much time or energy to enter it manually.
As I'm pretty familiar with Google's Firebase tech stack I landed on using the NoSQL Cloud Firestore for storing the usage data.
To my surprise I would have to recompile Python on my Raspberry Pi to be able to use the firebase-admin client library directly in my code. I didn't want to do that, so I decided to use Cloud Functions for an endpoint that could receive the data from my Raspberry Pi.
I've tweaked the code a bit here, but generally this is what the function does. Remember to add some level of authentication to not leave it wide open.
```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp(functions.config().firebase);

exports.registerReading = functions.https.onRequest((request, response) => {
    try {
        let incomingReadings = request.body;
        let db = admin.firestore();
        let dbReadings = [];
        for (const incomingReading of incomingReadings) {
            if (isValid(incomingReading)) { // isValid() checks required fields (defined elsewhere)
                let dbReading = {
                    "meterId": incomingReading["meterId"],
                    "parseTimeUtc": incomingReading["parseTimeUtc"],
                    "dataTimeLocal": incomingReading["dataTimeLocal"],
                    "meterType": incomingReading["meterType"],
                    "activePower+": incomingReading["activePower+"],
                    "activePower-": incomingReading["activePower-"],
                    "l1Voltage": incomingReading["l1Voltage"],
                    "l2Voltage": incomingReading["l2Voltage"],
                    "l3Voltage": incomingReading["l3Voltage"],
                    "price": incomingReading["price"],
                    "priceTimeUtc": incomingReading["priceTimeUtc"],
                    "raw": incomingReading["raw"] // The raw bytes coming from the meter
                }
                var collectionId = incomingReading.meterId; // We use the meter id as the collection ID
                if (incomingReading["activeEnergy+"] || incomingReading["activeEnergy-"]) {
                    // Hourly reading
                    dbReading["activeEnergy+"] = incomingReading["activeEnergy+"];
                    dbReading["activeEnergy-"] = incomingReading["activeEnergy-"];
                    collectionId += '-energy'; // We use a different collection for the hourly energy readings
                }
                let docRef = db.collection(collectionId).doc();
                docRef.set(dbReading);
                dbReadings.push(dbReading);
            } else {
                // [...error handling...]
            }
        }
    } catch (e) {
        // [...error handling...]
    }
    // [...]
});
```
I soon discovered that uploading the usage every 10 seconds blew through the daily free Functions quota of 5,000 function invocations. So I changed to uploading two readings in one go - ending up at ~4,300 invocations a day.
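The batching itself can be as simple as buffering readings and flushing every second one. Here's a hedged sketch; `upload` stands in for the actual HTTP POST to the Cloud Function, and the class is my own illustration rather than my exact script.

```python
# Sketch of batching meter readings to halve the number of function
# invocations. `upload` is a placeholder for the real HTTP POST.
class BatchingUploader:
    def __init__(self, upload, batch_size=2):
        self.upload = upload          # callable taking a list of readings
        self.batch_size = batch_size
        self.buffer = []

    def add_reading(self, reading):
        """Buffer a reading; flush to the backend once the batch is full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.upload(self.buffer)
            self.buffer = []          # start a fresh batch

# Demo with a fake uploader that just records the batches
sent = []
uploader = BatchingUploader(upload=sent.append)
for n in range(4):
    uploader.add_reading({"activePower+": 1500 + n})
# 4 readings -> 2 uploads of 2 readings each
```

The Cloud Function above already accepts an array of readings, so nothing changes on the receiving side.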
Now that the data is safely stored in Firestore it's ready to be queried and presented in any way desired. For me it was easiest to use Cloud Functions for reading the data back out as well.
Here's a very basic, quick-and-dirty example of querying the data.
```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp(functions.config().firebase);

exports.getReadings = functions.https.onRequest((request, response) => {
  let responseStr = "Today's first reading: \n";
  try {
    let db = admin.firestore();
    var meterRef = db.collection('2200567223197714-energy');
    var queryRef = meterRef.where('dataTimeLocal', '>=', '2019-07-29T00:00:00')
      .where('dataTimeLocal', '<', '2019-07-30T00:00:00')
      .orderBy('dataTimeLocal')
      .limit(1);
    queryRef.get().then(snapshot => {
      if (snapshot.empty) {
        responseStr += 'No matches\n';
      } else {
        snapshot.forEach(doc => {
          responseStr += JSON.stringify(doc.data(), null, 4) + '\n';
        });
      }
      return true;
    }).then(() => {
      responseStr += "Today's latest reading: \n";
      queryRef = meterRef.where('dataTimeLocal', '>=', '2019-07-29T00:00:00')
        .where('dataTimeLocal', '<', '2019-07-30T00:00:00')
        .orderBy('dataTimeLocal', 'desc')
        .limit(1);
      queryRef.get()
        .then(snapshot => {
          if (snapshot.empty) {
            responseStr += 'No matches\n';
          } else {
            snapshot.forEach(doc => {
              responseStr += JSON.stringify(doc.data(), null, 4) + '\n';
            });
          }
          response.send(responseStr);
          return true;
        })
        .catch(err => {
          responseStr += 'Error: ' + err + '\n';
          response.send(responseStr);
          console.error(err.stack);
        });
      return true;
    }).catch(err => {
      responseStr += 'Error: ' + err + '\n';
      response.send(responseStr);
      console.error(err.stack);
    });
  } catch (e) {
    responseStr += 'Error: ' + e + '\n';
    response.send(responseStr);
    console.error(e.stack);
  }
});
```
So now we have a pipeline for reading, decoding, storing and querying the power usage data. That's all nice and well, but it doesn't give much value in itself. In the next post I'll show the end product - how I chose to present the data in what I felt was a meaningful and valuable way. For me that is the most interesting and fun part.
What's interesting is that the new smart meters all come with a so-called HAN port (short for Home Area Network). Using that port it's possible to get full access to your own electricity usage in real time. While I'm sure great services (and APIs) for using this data will be provided by both my energy company and third-party vendors, I didn't want to sit back and wait. (Today you can get access to somewhat delayed hourly usage if you log in to https://plugin.elhub.no/. They also have some nice Ajax calls which are easy to understand and tweak.)
By default the physical HAN ports of the smart meters are closed off and not sending any data. All you need to do is to contact customer support at your power company and they'll quickly open it remotely.
My power company - Norgesnett - took almost a month to open it, as they said the newly installed meter first had to be registered in some computer system. Of course I also noticed a pretty bad security vulnerability while at it.
The smart meters use the M-Bus standard for the physical data transfer. So to read the data stream you need some kind of M-Bus converter. The smart meter acts as a so-called master and the receiver must be a slave. The master provides enough power to run a slave.
From what I read on the forums, a lot of people are successfully using this (or a similar) M-Bus to USB master/slave converter found on AliExpress, but for me and others it didn't work. I received shorter packets than expected and only the first part of each was readable.
Not being an electrical engineer and not knowing how to debug or resolve this I just threw money at the problem and bought another converter.
Another commonly used M-Bus to USB master/slave converter from eBay (the exact same product from the same seller no longer exists, but it looks like this and can be found with a search) did the trick for me. I connected it to my good old Raspberry Pi (Model B Rev 2). The HAN port on the smart meter has an RJ-45 connector with the signal transmitted on pins 1 and 2. So I just used an old network cable to connect the smart meter and the converter.
Python is not my mother tongue, but it's a language I really like and enjoy writing. It's almost always available on whatever system you're on, and the standard library is pretty extensive. Do a simple pip install pyserial
and you're ready to read data from the USB port.
For me, the serial port settings for the data stream were 2400 baud, no parity and a byte size of 8 bits.
So doing something like this I got the raw data stream (the code works in both Python 2.7 and 3.4):
```python
import serial
import codecs

ser = serial.Serial(
    port='/dev/ttyUSB0',
    baudrate=2400,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=4)

print("Connected to: " + ser.portstr)

while True:
    data = ser.read(1024)
    if data:
        print('Got %d bytes:' % len(data))
        # Hex-encode the bytes and print them in space-separated pairs
        hex_str = ('%02x' % int(codecs.encode(data, 'hex'), 16)).upper()
        hex_str = ' '.join(hex_str[i:i+2] for i in range(0, len(hex_str), 2))
        print(hex_str)
    else:
        print('Got nothing')
```
It would output something similar to this (I have anonymized the data a bit):
Connected to: /dev/ttyUSB0 Got 228 bytes: 7E A0 E2 2B 21 13 23 9A E6 E7 00 0F 00 00 00 00 0C 07 E3 06 12 02 14 2F 32 FF 80 00 80 02 19 0A 0E 4B 61 6D 73 74 72 75 70 5F 56 30 30 30 31 09 06 01 01 00 00 05 FF 0A 10 32 32 30 30 35 36 37 32 32 33 31 39 37 37 31 34 09 06 01 01 60 01 01 FF 0A 12 36 38 34 31 31 33 31 42 4E 32 34 33 31 30 31 30 34 30 09 06 01 01 01 07 00 FF 06 00 00 06 A7 09 06 01 01 02 07 00 FF 06 00 00 00 00 09 06 01 01 03 07 00 FF 06 00 00 00 00 09 06 01 01 04 07 00 FF 06 00 00 01 E0 09 06 01 01 1F 07 00 FF 06 00 00 00 88 09 06 01 01 33 07 00 FF 06 00 00 02 36 09 06 01 01 47 07 00 FF 06 00 00 00 6D 09 06 01 01 20 07 00 FF 12 00 EB 09 06 01 01 34 07 00 FF 12 00 EB 09 06 01 01 48 07 00 FF 12 00 EB 83 77 7E
So what are those bytes coming from the HAN port? They follow the DLMS (Device Language Message Specification) protocol, are sent inside HDLC frames, and contain OBIS (Object Identification System) codes that describe the electricity usage. Everything is part of IEC 62056, a set of standards for electricity metering data exchange.
How often the messages arrive varies from one meter vendor to another. The same goes for the actual format of the messages. I don't know if there are any other vendors, but at least Aidon, Kaifa and Kamstrup have made smart meters for the Norwegian market, and they all provide documentation for their own OBIS messages.
On my Kamstrup meter I get the current power usage every 10 seconds, plus the total kWh usage every hour.
To really understand the HDLC and OBIS codes you need to dig into different sources around the Internet, but the Norwegian forums at hjemmeautomasjon.no are a great source of information. There are so many knowledgeable people sharing their work and helping each other out.
I assume I haven't got everything figured out and there are likely some errors, but this is my interpretation of the message:
```
Header:
7E                 <-- Frame start flag
A                  <-- 4 bits, A = 0b1010 = frame format type 3
0E2                <-- 1 bit, segmentation bit + 11 bits, frame length sub-field,
                       0xE2 = 226 bytes (excluding opening and closing frame flags)
2B                 <-- Destination address, 1 bit, 0b1 = unicast + 6 bits, node
                       address, 0b010101 = 21 + 1 bit, address size, 0b1 = 1 byte
21                 <-- Source address, 1 bit, 0b1 = unicast + 6 bits, node
                       address, 0b010000 = 16 + 1 bit, address size, 0b1 = 1 byte
13                 <-- Control field
23 9A              <-- Header check sequence (HCS) field, CRC-16/X-25

Information:
E6                 <-- Destination LSAP
E7                 <-- Source LSAP, LSB = 0b1 = command
00                 <-- LLC Quality
0F                 <-- LLC Service Data Unit
00 00 00 00        <-- "Long-Invoke-Id-And-Priority"?
0C                 <-- string length?, 0x0C = 12
07 E3              <-- Full year, 0x07E3 = 2019
06                 <-- Month, June
12                 <-- Day of month, 0x12 = 18
02                 <-- Day of week, Tuesday
14                 <-- Hour of day, 0x14 = 20
2F                 <-- Minute of hour, 0x2F = 47
32                 <-- Second of minute, 0x32 = 50
FF                 <-- Hundredths of second, 0xFF = not specified
80 00              <-- Deviation (offset from UTC), 0x8000 = not specified
80                 <-- Clock status, 0x80 = 0b10000000, MSB 1 = summer time
02                 <-- struct
19                 <-- 0x19 = 25 elements
0A                 <-- visible-string
0E                 <-- string length, 0x0E = 14 bytes
4B 61 6D 73 74 72 75 70 5F 56 30 30 30 31
                   <-- OBIS List Version Identifier, Kamstrup_V0001
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 00 00 05 FF  <-- OBIS for Meter ID, 1.1.0.0.5.255
0A                 <-- visible-string
10                 <-- string length, 0x10 = 16 bytes
32 32 30 30 35 36 37 32 32 33 31 39 37 37 31 34
                   <-- Meter ID, altered
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 60 01 01 FF  <-- OBIS for meter type, 1.1.96.1.1.255
0A                 <-- visible-string
12                 <-- string length, 0x12 = 18 bytes
36 38 34 31 31 33 31 42 4E 32 34 33 31 30 31 30 34 30
                   <-- Meter type, 6841131BN243101040
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 01 07 00 FF  <-- OBIS for Active Power +, 1.1.1.7.0.255
06                 <-- unsigned, 4 bytes
00 00 06 A7        <-- 0x06A7 = 1703 Watt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 02 07 00 FF  <-- OBIS for Active Power -, 1.1.2.7.0.255
06                 <-- unsigned, 4 bytes
00 00 00 00        <-- 0 Watt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 03 07 00 FF  <-- OBIS for Reactive Power +, 1.1.3.7.0.255
06                 <-- unsigned, 4 bytes
00 00 00 00        <-- 0 Watt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 04 07 00 FF  <-- OBIS for Reactive Power -, 1.1.4.7.0.255
06                 <-- unsigned, 4 bytes
00 00 01 E0        <-- 0x01E0 = 480 Watt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 1F 07 00 FF  <-- OBIS for L1 Current, 1.1.31.7.0.255
06                 <-- unsigned, 4 bytes
00 00 00 88        <-- 1.36 Ampere
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 33 07 00 FF  <-- OBIS for L2 Current, 1.1.51.7.0.255
06                 <-- unsigned, 4 bytes
00 00 02 36        <-- 5.66 Ampere
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 47 07 00 FF  <-- OBIS for L3 Current, 1.1.71.7.0.255
06                 <-- unsigned, 4 bytes
00 00 00 6D        <-- 1.09 Ampere
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 20 07 00 FF  <-- OBIS for L1 Voltage, 1.1.32.7.0.255
12                 <-- unsigned, 2 bytes
00 EB              <-- 235 Volt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 34 07 00 FF  <-- OBIS for L2 Voltage, 1.1.52.7.0.255
12                 <-- unsigned, 2 bytes
00 EB              <-- 235 Volt
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 48 07 00 FF  <-- OBIS for L3 Voltage, 1.1.72.7.0.255
12                 <-- unsigned, 2 bytes
00 EB              <-- 235 Volt

End:
83 77              <-- Frame check sequence (FCS) field, CRC-16/X-25, altered
7E                 <-- Frame end flag
```
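Based on this interpretation, the opening header fields can be parsed with a few bit operations. This is a minimal sketch following my reading of the frame layout; a real parser should also verify the HCS/FCS checksums:

```python
def parse_frame_header(frame):
    """Parse the frame format type and frame length from an HDLC frame.
    The length sub-field excludes the two 7E flag bytes."""
    if frame[0] != 0x7E or frame[-1] != 0x7E:
        raise ValueError('missing 7E frame flags')
    frame_format = frame[1] >> 4                   # high nibble, 0b1010 = type 3
    length = ((frame[1] & 0x07) << 8) | frame[2]   # 11-bit frame length sub-field
    if length != len(frame) - 2:                   # sanity check against the flags
        raise ValueError('frame length mismatch')
    return frame_format, length
```

For the 228-byte message above (7E A0 E2 ...), this yields frame format type 0b1010 and a length of 0xE2 = 226 bytes.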
For me the most interesting part of this message is the OBIS code for Active Power + (1.1.1.7.0.255), which tells how much power - in watts - is currently being used. If you have a house that produces electricity and exports it to the grid (e.g. if you have solar cells), the exported power would appear under the OBIS code for Active Power - (1.1.2.7.0.255).
The message appearing hourly is similar to the one that comes every 10 seconds, but contains a bit more information:
Connected to: /dev/ttyUSB0 Got 302 bytes: 7E A1 2C 2B 21 13 FC 04 E6 E7 00 0F 00 00 00 00 0C 07 E3 07 09 02 14 00 05 FF 80 00 80 02 23 0A 0E 4B 61 6D 73 74 72 75 70 5F 56 30 30 30 31 09 06 01 01 00 00 05 FF 0A 10 32 32 30 30 35 36 37 32 32 33 31 39 37 37 31 34 09 06 01 01 60 01 01 FF 0A 12 36 38 34 31 31 33 31 42 4E 32 34 33 31 30 31 30 34 30 09 06 01 01 01 07 00 FF 06 00 00 01 6C 09 06 01 01 02 07 00 FF 06 00 00 00 00 09 06 01 01 03 07 00 FF 06 00 00 00 00 09 06 01 01 04 07 00 FF 06 00 00 01 42 09 06 01 01 1F 07 00 FF 06 00 00 00 85 09 06 01 01 33 07 00 FF 06 00 00 00 5C 09 06 01 01 47 07 00 FF 06 00 00 00 3F 09 06 01 01 20 07 00 FF 12 00 EB 09 06 01 01 34 07 00 FF 12 00 EB 09 06 01 01 48 07 00 FF 12 00 EB 09 06 00 01 01 00 00 FF 09 0C 07 E3 07 09 02 14 00 05 FF 80 00 80 09 06 01 01 01 08 00 FF 06 00 38 DE 2A 09 06 01 01 02 08 00 FF 06 00 00 00 00 09 06 01 01 03 08 00 FF 06 00 00 00 1F 09 06 01 01 04 08 00 FF 06 00 09 00 85 83 77 7E
I've left out the identical parts of the message:
```
Header:
[...same as first message...]

Information:
[...same as first message...]
02                 <-- struct
23                 <-- 0x23 = 35 elements
[...]
4B 61 6D 73 74 72 75 70 5F 56 30 30 30 31
                   <-- OBIS List Version Identifier, Kamstrup_V0001
[...]
01 01 00 00 05 FF  <-- OBIS for Meter ID, 1.1.0.0.5.255
[...]
01 01 60 01 01 FF  <-- OBIS for meter type, 1.1.96.1.1.255
[...]
01 01 01 07 00 FF  <-- OBIS for Active Power +, 1.1.1.7.0.255
[...]
01 01 02 07 00 FF  <-- OBIS for Active Power -, 1.1.2.7.0.255
[...]
01 01 03 07 00 FF  <-- OBIS for Reactive Power +, 1.1.3.7.0.255
[...]
01 01 04 07 00 FF  <-- OBIS for Reactive Power -, 1.1.4.7.0.255
[...]
01 01 1F 07 00 FF  <-- OBIS for L1 Current, 1.1.31.7.0.255
[...]
01 01 33 07 00 FF  <-- OBIS for L2 Current, 1.1.51.7.0.255
[...]
01 01 47 07 00 FF  <-- OBIS for L3 Current, 1.1.71.7.0.255
[...]
01 01 20 07 00 FF  <-- OBIS for L1 Voltage, 1.1.32.7.0.255
[...]
01 01 34 07 00 FF  <-- OBIS for L2 Voltage, 1.1.52.7.0.255
[...]
01 01 48 07 00 FF  <-- OBIS for L3 Voltage, 1.1.72.7.0.255
[...]
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
00 01 01 00 00 FF  <-- OBIS for Real Time Clock (RTC), 0.1.1.0.0.255
09                 <-- octet-string
0C                 <-- string length, 0x0C = 12 bytes
07 E3              <-- Full year, 0x07E3 = 2019
07                 <-- Month, July
09                 <-- Day of month, 9
02                 <-- Day of week, Tuesday
14                 <-- Hour of day, 0x14 = 20
00                 <-- Minute of hour, 0
05                 <-- Second of minute, 5
FF                 <-- Hundredths of second, 0xFF = not specified
80 00              <-- Deviation (offset from UTC), 0x8000 = not specified
80                 <-- Clock status, 0x80 = 0b10000000, MSB 1 = summer time
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 01 08 00 FF  <-- OBIS for Active energy A+, 1.1.1.8.0.255
06                 <-- unsigned, 4 bytes
00 38 DE 2A        <-- 0x38DE2A = 3,726,890 = 37,268.90 kWh
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 02 08 00 FF  <-- OBIS for Active energy A-, 1.1.2.8.0.255
06                 <-- unsigned, 4 bytes
00 00 00 00        <-- 0 kWh
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 03 08 00 FF  <-- OBIS for Reactive energy R+, 1.1.3.8.0.255
06                 <-- unsigned, 4 bytes
00 00 00 1F        <-- 0x1F = 31 = 0.31 kWh?
09                 <-- octet-string
06                 <-- string length, 0x06 = 6 bytes
01 01 04 08 00 FF  <-- OBIS for Reactive energy R-, 1.1.4.8.0.255
06                 <-- unsigned, 4 bytes
00 09 00 85        <-- 0x090085 = 589,957 = 5,899.57 kWh?

End:
[...same as first message...]
```
The only part I really care about is Active energy A+ (OBIS code 1.1.1.8.0.255), which is the total power usage - in kilowatt hours (kWh) - since the installation of the smart meter. By keeping track of this value you know the hourly power consumption. This is the value you have to pay for. If you produce and export power it would appear as Active energy A- (1.1.2.8.0.255).
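Since the A+ register is cumulative, the hourly consumption is just the difference between two successive hourly readings. A small sketch, assuming the register unit of 10 Wh implied by the breakdown above (0x38DE2A = 3,726,890 raw = 37,268.90 kWh):

```python
def hourly_consumption_kwh(prev_reading, curr_reading):
    # The cumulative A+ register counts in units of 0.01 kWh (10 Wh),
    # so consumption between two readings is the difference divided by 100.
    return (curr_reading - prev_reading) / 100.0

# 250 register ticks between two hourly messages = 2.5 kWh used that hour
print(hourly_consumption_kwh(3726640, 3726890))  # 2.5
```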
Error detection is supported through cyclic redundancy checks (CRC) in both the header and footer of the frame. At the start there is a header check sequence (HCS), and at the end there is a frame check sequence (FCS). The checksum algorithm used is CRC-16/X-25. There are libraries for all kinds of programming languages implementing all sorts of checksum calculations. I have used the Python library crccheck, which provides the class CrcX25 that takes care of this.
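If you'd rather not pull in a dependency, CRC-16/X-25 is only a few lines. This sketch should compute the same value as crccheck's CrcX25 (as I understand its parameters: polynomial 0x1021 reflected, init 0xFFFF, reflected in/out, final XOR 0xFFFF):

```python
def crc_x25(data):
    """CRC-16/X-25, the checksum used for both the HCS and FCS fields."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x8408 is the bit-reversed form of the 0x1021 polynomial
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

# Well-known check value for this CRC variant:
print(hex(crc_x25(b'123456789')))  # 0x906e
```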
Here are some of the sources of information I've used for deciphering the messages. I'll leave them here for anyone wanting to dive deeper:
That's it. In this post I've shown how I connected to the HAN port of my smart meter, how I read the data and how to transform the byte arrays into meaningful information. In the next post I'll discuss how I store the data and calculate the price of the electricity usage.
Personal information and documents from thousands of individuals were leaked in a government booking system.
Who: | Aktiv kommune (City of Bergen, City of Stavanger, City of Ålesund, Fjell municipality) |
Severity level: | Medium to High |
Reported: | March 2019 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | A thank you |
Issue: | Leak of personal information |
Bergen is one of the most beautiful cities in Norway. And the City of Bergen offers this really cool "cabin" with a great view on its mount Fløyen, where families can spend a night for free. I was asked if we should try to book a night there. But when I saw the URL of the site, the curious developer in me immediately got sidetracked...
The booking site is run by a system shared between the municipalities of Bergen, Stavanger, Ålesund and Fjell. Aktiv kommune is a sort of shared portal for this cooperation. The booking system is used by organizations and individuals to book all kinds of facilities and equipment - sport courts, venues, meeting rooms, musical instruments, etc. There are thousands of such "resources" that can be booked.
I opened the Vivaldi developer tools while browsing the site. There's a calendar on the site showing the availability of a selected resource. The calendar data is loaded as JSON via Ajax. It took me like 30 seconds to see that the server returned way too much data - including names, phone numbers, e-mail addresses, Social Security numbers etc.
This is just so common - the server returns some kind of serialized data structure that contains much more information than what is used for the user interface. This reminded me of the case where a garbage collection calendar app leaked personal data.
Days after I reported the issue I was still curious whether the site would be safe to use once that issue was fixed. I filled out the application form and uploaded an attachment. The URL to the finished application contained a "secret" so that no one should be able to guess the URL to your application. Other than that, the ID seemed to be an incremental integer. But did the URL to the attached document in the application contain some kind of secret? Guess what, it didn't. The URLs to all documents uploaded by users were based on an incremental integer ID. One could systematically go through and download all the documents.
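The gap between the two schemes is easy to quantify. A small illustrative sketch (the numbers and the URL pattern are hypothetical, not from the actual site):

```python
import math

# An incremental integer ID: anyone can walk /documents/1, /documents/2, ...
# and fetch every uploaded attachment with one request per document.
sequential_guesses = 100_000            # hypothetical number of documents

# A random token like the application URL "secret": infeasible to enumerate.
random_token_space = 2 ** 128           # e.g. a 128-bit random value

print(sequential_guesses)               # 100000
print(math.log2(random_token_space))    # 128.0
```

Enumerating a hundred thousand sequential IDs takes minutes; guessing a single 128-bit token is, for practical purposes, impossible.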
I just checked a few, but to my surprise and horror the documents included ID cards, passports, family photos and e-mails. This was not the kind of data I wanted stored on my computer, even though it was openly available on the Internet. Luckily I found another unprotected URL which "just" listed the file names of all the uploaded documents. This made it easier to document the vulnerability without actually downloading anything. The file list contained "interesting" names: full names, images with identifiers clearly pointing back to Facebook, and words like "passport", "visa", "e-mail", "rental agreement", "ticket" etc.
Among the available documents there were a few ID cards, passports, tickets, visa, family photos, contracts and e-mails.
For the mentioned cabin you could see who was to stay there on a given night - including the age group and gender of each family member.
According to the municipalities themselves, information about 3,142 individuals was leaked in the City of Bergen, 628 in Fjell municipality and 16 in the City of Ålesund. The City of Stavanger seems to have "forgotten" to state the number of persons affected. I suppose that in addition there were documents and other information about organizations available.
As the booking system was used by several municipalities on different URLs I wasn't sure what would be the best contact point. I sent an e-mail to Norwegian National Security Authority's (NSM) NorCERT (Computer Emergency Response Team) and they said they could contact the right persons. The few times I have talked with NorCERT they have always been very helpful, responsive and effective.
Two days later I got responses from NorCERT, Aktiv kommune (City of Stavanger) and City of Bergen. A project manager from Aktiv kommune thanked me and told me that they had fixed the issue and reported it to The Norwegian Data Protection Authority (DPA) - Datatilsynet.
I noticed the other issue with documents being downloadable and at night reported that to Aktiv kommune and City of Bergen.
Some e-mailing back and forth and the issue was fixed. Then the City of Bergen, City of Stavanger, City of Ålesund and Fjell municipality each posted their own news article describing the issue.
The problem with the news articles posted by the municipalities was that they seemed geared only towards the initially reported issue. They didn't mention any of the leaked documents. No passports, no ID cards, no e-mails, no contracts, no nothing. I asked the City of Bergen about this, and that is actually the only e-mail of mine they have not responded to.
The report from the City of Bergen to the DPA is one of the most honest and best ones I have read. They mention 5 passports, but I believe that number to be incorrect. Yes, if you look quickly at the filenames (and assume equal filenames in a row are duplicates) you will see 5 files containing the word "pass". But the ID cards and passport I saw had other types of filenames. They also say the quality of the images was low. Well, that cannot be said about the ones I saw. In addition there were filenames with "pasaporte", "visa", "flights", "itinerary", "paszport", "ticket" and quite a few full names and what seem to be e-mails and contracts. I hope they will report those as well.
The report also doesn't say that someone external reported the document issue, or that it happened days after the first report. The report starts out so honest, but then it turns into the kind of questionable text that the DPA usually receives.
✅ The issues were fixed quickly
✅ The DPA was alerted
✅ The individuals affected were informed by e-mail
⚠️ Mostly open about the number of persons affected
❌ City of Bergen's report to the DPA seems lacking
❌ No mention of the document leak in news articles or e-mails
❌ No mention anywhere that one could also see the gender and age groups of the people accommodated
As we all know by now, leaks like this happen constantly. This is why I started publishing the issues that I trip over when I'm online. We need more focus on IT security in IT education, in IT projects and in IT companies. And people should be cautious about what information is left where.
On a positive note, the handling of this issue on the City of Bergen's side was quite a few steps up from the last time they were in the media regarding security issues.
An e-commerce site had misconfigured their web server, which left a backup of their entire site and database - with all shopping and personal data - available on the Internet. And you could find it with a simple Google search.
Who: | Anonymous, let's call them Acme5 |
Severity level: | High |
Reported: | September 2018 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | A thank you |
Issue: | Website backup and database backup accessible by a simple Google search |
Acme5 is a Norwegian physical specialist store that also has an online web store.
I briefly mentioned this case in the presentation I gave in October 2018 at the security conference Sikkerhetssymposiet, but I never got around to writing about it. I have been wanting to do one or more write-ups on Google dorking, that is, how to use Google to find security vulnerabilities. A good starting point for checking your own security is googling yourself. There are endless vulnerabilities and secrets indexed by Google, available to anyone who runs a simple search. While doing research for that kind of write-up I found the issue presented here.
One of the searches I did was:

intitle:"index of" intext:backup

"Index of" in the title is used by at least the Apache web server when displaying a directory listing, and "backup" is an interesting name to see in a directory listing.
One of the search results especially caught my attention. I clicked it and thought "can this really be what it looks like?" Could it be a honeypot? If I were to leave fake data on the Internet, I would leave it just like that.
I clicked the files and took a quick peek. This was the real deal.
The website backup contained the source code and configuration of the full site. I don't think Acme5 could have leaked much more. At least the passwords were hashed with individual salts, defending against pre-computed rainbow table attacks, but with the database in hand the hashes would still be open to dictionary attacks and easier to brute force.
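For illustration, here's roughly what individually salted hashing looks like (a sketch using PBKDF2; I don't know which scheme the site actually used). The per-user salt makes pre-computed rainbow tables useless, but an attacker holding the leaked salt and hash can still run a dictionary of candidate passwords through the same function:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt (PBKDF2-HMAC-SHA256)."""
    salt = salt or os.urandom(16)   # fresh random salt per user
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest
```

A slow, iterated hash like this at least raises the cost of each dictionary guess; a plain salted MD5 or SHA-1 would fall much faster.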
I was a bit amazed by this finding and considered for a second if this was a case for Troy Hunt and his service Have I Been Pwned. However, I ended up just contacting the web shop by e-mail.
An hour and a half later I received a reply from the IT company Acme5 was using, thanking me for alerting them and asking for confirmation that I had deleted the files. They claimed to have web server access logs pre-dating the 3-month-old backups, showing that the files had only been downloaded once. I confirmed that I no longer had the files. They said they would "take the appropriate action in accordance with Acme5's GDPR routines". And that was it.
Now, I have no idea if they followed up the incident, or whether they reported anything to the Data Protection Authority. Maybe they felt they didn't have to, since they claimed that no one else had accessed the files.
I have been in doubt whether this is a case where the company should be named. I suppose this is the biggest leak where I haven't named the company. The reasons for not doing so are that Acme5 isn't that big, their IT vendor is a small company, and supposedly they can tell for sure that no one else had accessed the data.
Technically it's incredibly simple for a system administrator to make a mistake like this, but you just can't do it. (Sometimes you have to wonder if some leaks are intentional.)
As an IT company; please Google yourself. And please hire an external company to do penetration tests and regular security audits. And stay tuned for that blog post about Google hacking.
I created a simple JavaScript Space Invaders-like game that uses the browser's local storage as its "canvas". You need to open your developer tools to play it.
Control your player with your arrow keys or WASD.
I've run the game successfully on Chrome 72 @ Mac + Windows 10, Firefox 65 @ Mac + Linux Mint, Firefox X @ Windows 7, Vivaldi 2.3 @ Mac and Opera 58 on Mac + Windows 10.
It does not work on Microsoft Edge 42 @ Windows 10, as its DevTools doesn't auto-refresh any of the web storages. Internet Explorer 11 (at least on Windows 7) doesn't have a proper localStorage viewer.
Are you kidding me? Because it could be done, because it's something I haven't seen before, and because programming is tons of fun. When I got the idea I had to finish it.
It is a bit annoying to play the game in Firefox's devtools as they have a (probably normally useful) feature of blinking every time the localStorage
is updated.
From somewhere else entirely. I recently saw a Hacker News post about an 11-year-old favicon game called Defender of the favicon. I was a bit annoyed that I didn't come up with that idea (either before 2008 or in any of the 11 following years...), so I wanted to create something nerdy and fun I hadn't seen before. Pretty quickly I came up with the idea of using the browser developer tools as the game "screen". First I thought about running the game in the cookies, but the JavaScript API for handling cookies is almost non-existent. Using the localStorage
is so much simpler, and to my surprise all the browsers I first tested immediately updated the developer tools to show the contents of the local storage.
Yes! Just head over to my GitHub repository for the game. Please note that while my game code is licensed under MIT License, the sound and music assets have their own licenses.
Why don't you try out my bookmarklet game DOM II: JavaScript Hell?
I hope that you will make Tetris or a car game for the browser devtools. That would be so cool.
None of the developer tools I tried used a monospaced font for the localStorage viewer, making it really hard to make the game at all.
Emojis are not all of exactly the same width, but they are close enough, which made the game easier to build.
There's a whitespace character called ideographic space that is about the same width as many emojis. Without that I don't think I could have made this game.
Mozilla has a really nice description of how to internationalize your keyboard controls so that WASD magically becomes ZQSD for people with AZERTY keyboards. (Hello 🇫🇷.)
The world isn't ready for ES6. That's no surprise, I suppose. I started out going all in on classes, fields (OK, that's TC39), arrow functions, computed property names, various String functions, etc., but had to fall back to the old ways to make the code run everywhere.
It was possible to control what seemed to be all Internet-connected Mill heaters worldwide.
Who: | Mill |
Severity level: | Medium |
Reported: | October 2018 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | A thank you |
Issue: | It was possible to control all Internet connected heaters |
Mill is a company selling different heaters. If you've ever seen a heater that actually looks good, it was probably a Mill heater. Some of the heaters can be controlled with their app via the Internet. They also have a Wi-Fi socket product that can be connected to good old "dumb" heaters.
Please note that this was originally not my finding. A friend of mine has a few Mill heaters and let me access what was needed to check out this issue.
The first thing that surprised me was that the app connected to a hostname belonging to a Chinese "IoT platform" (Mill is Norwegian). The IP seemed to belong to a machine running on Amazon Elastic Compute Cloud (EC2) in Germany.
The first thing that scared me was that they only used HTTPS for the authentication. All other communication was unencrypted.
During authentication the server gave the client an access token. The token seemed to be valid for 24+ hours.
When receiving information and sending commands, the app sent some headers to prove authenticity. There was a signature based on a nonce, a timestamp, the user ID and the authentication token. The timestamp and nonce were also sent in the request. But there was one problem with the request headers: the exact same headers could be reused again and again, both for replay attacks and for any other command or information retrieval. This was not the app you wanted to use from your sunbed on vacation while connected to the nearest open Wi-Fi network.
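The exact signature scheme isn't public, but server-side replay protection generally needs the checks sketched below: reject stale timestamps, reject reused nonces, and only then verify the signature. The hash construction and parameter order here are my assumptions for illustration, not Mill's actual scheme:

```python
import hashlib
import time

seen_nonces = set()   # in production: a store with expiry, not an unbounded set

def verify_request(nonce, timestamp, user_id, token, signature, max_age=300):
    # Reject stale timestamps so captured requests expire quickly.
    if abs(time.time() - int(timestamp)) > max_age:
        return False
    # Reject nonces we have already seen -- this is what stops replays.
    if nonce in seen_nonces:
        return False
    # Verify the signature over the request parameters and the secret token.
    expected = hashlib.sha1(
        f'{timestamp}{nonce}{user_id}{token}'.encode()).hexdigest()
    if signature != expected:
        return False
    seen_nonces.add(nonce)
    return True
```

Without the timestamp and nonce checks, the signature only proves that the request was valid once - anyone who captures it on an open Wi-Fi network can send it again.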
But then there was the real issue. I got the ID of an oven from my friend, with his blessing to try to adjust its temperature. And guess what, it was possible to do just that. First one could get the status of the oven and check if it was online, then one could change its status - including setting the temperature.
The ovens were assigned IDs that seemed to be incremental numbers. So once this issue was present, it became a large-scale one: it seemed one could set the temperature of any and all online Mill ovens worldwide.
```shell
curl -H 'Content-Type: application/x-zc-object' \
  -H 'X-Zc-Content-Length: 85' \
  -H 'X-Zc-Major-Domain: seanywell' \
  -H 'X-Zc-Sub-Domain: milltype' \
  -H 'X-Zc-Timestamp: 1939713271' \
  -H 'X-Zc-Timeout: 300' \
  -H 'X-Zc-Nonce: [some nonce]' \
  -H 'X-Zc-User-Id: [your own user id]' \
  -H 'X-Zc-User-Signature: [sha1 of time params, nonce and auth token]' \
  -H 'Host: [Mill server]' \
  --data-binary '{"homeType":0,"timeZoneNum":"+02:00","deviceId":[some oven id],"value":28,"key":"holidayTemp"}' \
  'http://[Mill server]/millService/v1/changeDeviceInfo'
```
It was possible to change the status - aka set the temperature - of the ovens. What if someone had turned off the oven because they were e.g. temporarily storing something close to it, and then someone else turned the oven to full via the Internet? Could that potentially cause a fire? Their user manuals specify minimum distances for the ovens and that they need to be kept away from flammable materials.
At night I sent an e-mail to both their specified contact and Play Store e-mail address.
The morning after, they responded thanking me for telling them about the issue and that they had started working on fixing it.
At night - after business hours - I got a response that they had a solution that they were running some final tests on. They also asked for my opinion on some of the changes they were going to do.
Not everything was fixed overnight, but they showed they were on the ball, taking it seriously and fixing the worst parts first.
I live in a smart home, and I like the ease of being able to see the temperature of every room and control the heating from anywhere. I can understand why people use Mill's and others' solutions. And imagine having a cabin in the cold snowy mountains where you can adjust the heat so that it's pre-heated just before you arrive. It's perfect. On the other hand, I have a lot of respect for those not wanting to connect their lives or homes to the Internet, because it will fail at one point or another. It doesn't have to be a specific case like this; we're also talking about privacy issues with regard to big companies and governments, and about surveillance by anyone from burglars to jealous partners to governments.
This was yet another one of many, many incidents of IoT security failing. We must come up with some kind of labelling of IoT devices that can work as a statement that the vendors have at least gone through some minimum-efforts checklist and that they actively use third-party companies to check their security. If some big companies in the industry get together and work out a simple framework for this, we could start going down the right path. I don't think we have time to wait for laws and regulations around the world.
The Norwegian Police Security Service (with the Norwegian abbreviation PST which I'll use for this write-up) is the police security agency of Norway. In December 2018 they posted a job listing seeking a "curious solution-oriented digital forensics specialist".
In the job listing they included a limerick which was a riddle that they wanted people to solve. I didn't hear of the job posting until it suddenly appeared on the front page of most Norwegian newspapers in January. The titles were typically something like "if you solve this puzzle the job could be yours".
Almost every part of the whole mystery had a shark theme. The background for that was a somewhat embarrassing episode from October 2018 when PST suddenly tweeted a picture of a shark. It was the 7-year-old child of one of the PST employees who had accidentally clicked the share button while playing the game Hungry Shark Evolution on his father's iPad. This of course sparked speculation that their Twitter account had been hacked. (It was not an iPad with any classified information, but an iPad used for updating Twitter from home. (Their Twitter account is also closed for direct messages.))
Oh, and "hai" is the Norwegian word for "shark".
The limerick in the job posting was as follows:
En oktobermorgen fikk vi pulsen til å øke
var det en hackeR som hadde lykkes med forsøkeT
men en HAI I en TWeeT
er ikke særlig 1337.
løser du gåtEn bør dU vurdere å søke
The literal meaning of the limerick (roughly: "One October morning our pulse started to rise / was it a hacker who had succeeded in the attempt / but a SHARK IN a TWeeT / isn't very 1337 / if you solve the riddle you should consider applying") doesn't matter in itself, but there's a pattern in it:
En oktobermorgen fikk vi pulsen til å øke
var det en hackeR som hadde lykkes med forsøkeT
men en HAI I en TWeeT
er ikke særlig 1337.
løser du gåtEn bør dU vurdere å søke
The upper case letters are ERTHAIITWTEU. As you might know, .eu is a top-level domain, and you might've noticed there's also one single period in the limerick. This makes it ERTHAIITWT.EU. If you're quicker than me (and Norwegian) you might already see that the first part can be re-ordered to TWITTERHAI (Twitter shark).
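A quick sanity check confirms that ERTHAIITWT really is a re-ordering of TWITTERHAI - the two strings use exactly the same letters:

```javascript
// Check that two strings are anagrams by comparing their sorted letters.
function isAnagram(a, b) {
  const norm = s => s.toUpperCase().split('').sort().join('');
  return norm(a) === norm(b);
}

console.log(isAnagram('ERTHAIITWT', 'TWITTERHAI')); // true
```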
I didn't see the re-ordering opportunity, so I visited http://erthaiitwt.eu. That site shows a hangman saying that the order is incorrect. The right solution for the first step was to visit http://twitterhai.eu.
http://twitterhai.eu shows an image of a shark and gives another poem telling you to look around. Now, I would always look at the source code anyway, but I also saw that there was this strange spacing in the middle of the word være - a sign that something is going on in the HTML source code.
Do you see the odd line breaks and what the first column says? Shark.html
I tend to look closely at robots.txt and have even made a nice robots.txt linkifier bookmarklet to easily visit and open links from exactly that file. Visiting http://twitterhai.eu/robots.txt you get a helpful hint saying Use the source, Luke!.
I'm used to file names being lower case, so I went straight to http://twitterhai.eu/shark.html. There you get the helpful hint that Case matters. Watch your characters. The solution for the second step was to visit http://twitterhai.eu/Shark.html.
I suppose some people stopped looking when they found http://twitterhai.eu/Shark.html, which gives a tip about taking the time to write a good job application. However, there's an HTML comment telling you that there are more puzzles if you look more closely.
I couldn't see any clues or directions to go other than the image from step 2. What made this one hard for me was that the tools I used to look for Exif metadata didn't reveal anything. As I understand it, the solution wasn't hiding in Exif at all, but rather in a (JPEG) file comment. There are online tools out there that give much more than just Exif, and if you're on a Unix-like system you can use the file command to get the info needed:
```shell
file 1337_shrk.jpg
1337_shrk.jpg: JPEG image data, JFIF standard 1.01, aspect ratio, density 72x72, segment length 16,
Exif Standard: [TIFF image data, big-endian, direntries=3, PhotometricIntepretation=RGB, orientation=upper-left],
comment: "/haitech_secure.html", baseline, precision 8, 851x514, frames 3
```
The solution for the third step was to visit http://twitterhai.eu/haitech_secure.html.
http://twitterhai.eu/haitech_secure.html contained some general text with some "tips" for when applying for a job. More importantly, it contained a password field and a login button.
The source code revealed client-side validation of the password: the password was first verified, then used as a key to decipher a ciphertext.
The JavaScript doing the actual verification of the key was as follows:
```javascript
password.charCodeAt(0) == 8 * 8 + 8
  && password.charCodeAt(1) == Math.pow(9, 2) - 29
  && password.charCodeAt(2) == Math.pow(10, 2) + Math.pow(3, 2)
  && password.charCodeAt(3) == password.charCodeAt(2)
  && password.substring(4, 5) == 3
  && password.charCodeAt(5) == 7 * 17 - 5
  && password.charCodeAt(6) == password.charCodeAt(0)
  && password.charCodeAt(7) == password.charCodeAt(1)
  && password.charCodeAt(8) == 0x69
```
Obviously the password was 9 characters long and it was just a matter of calculating each char. I just copied and pasted a bit and used the following quick and dirty JavaScript in the console to print out the password and run the login function which gave an alert box with the next clue.
```javascript
var password = [];
password[password.length] = 8 * 8 + 8;
password[password.length] = Math.pow(9, 2) - 29;
password[password.length] = Math.pow(10, 2) + Math.pow(3, 2);
password[password.length] = password[password.length - 1];
password[password.length] = '3'.charCodeAt(0);
password[password.length] = 7 * 17 - 5;
password[password.length] = password[0];
password[password.length] = password[1];
password[password.length] = 0x69;
for (var i = 0; i < password.length; i++) {
  password[i] = String.fromCharCode(password[i]);
}
password = password.join('');
document.getElementById("password").value = password;
console.log(password);
login();
```
The password needed to proceed was H4mm3rH4i (hammerhead shark).
The alert box output the text Caesar synes at du skal ta turen hit: uggc://gjvggreunv.grpu/unv_gurer.ugzy, meaning Caesar thinks you should take the trip here: uggc://gjvggreunv.grpu/unv_gurer.ugzy.
Just by looking at the last part itself it seemed pretty obvious that this was a Caesar cipher. It had to be http://somethingsomething.html. There are plenty of places to solve these kinds of codes, but a quick bit of JavaScript also does the trick:
```javascript
var cipher = 'uggc://gjvggreunv.grpu/unv_gurer.ugzy';
var shift = cipher.charCodeAt(0) - 'h'.charCodeAt(); // Char we suspect to know
var charDistance = 'a'.charCodeAt();
var cleartext = '';
for (var i = 0; i < cipher.length; i++) {
  var charCode = cipher.charCodeAt(i) - charDistance;
  if (charCode >= 0 && charCode <= 25) { // a-z
    cleartext += String.fromCharCode(((charCode + shift) % 26) + charDistance);
  } else {
    cleartext += cipher[i];
  }
}
console.log(shift, charDistance, cleartext);
```
The plaintext for this one was http://twitterhai.tech/hai_there.html.
http://twitterhai.tech/hai_there.html just told you to follow the Twitter user twitt3rhai (Twitter shark).
The latest tweet was the text "#justdoit, or make your ROBOTS do it. Transfer teXt to an ediTor". I suppose this was a double hint: another trip to robots.txt, and transferring the rest of the tweets to a text editor.
The robots.txt pointed to the path /min_hemmelige_mappe/ ("my secret folder").
There were 65 other tweets containing shark and fish related words. It looked like some kind of cipher. The best thing to do was to just copy all the tweets into a text editor:
Tigerhai HvithaiHammerhaiHvalhai Oksehai Domenehai Brugde Brugde DomenehaiTigerhaiHvithai Hvalhai Hammerhai Oksehai HammerhaiJaws Fish Hvalhai Oksehai Domenehai Tigerhai HvithaiHaiene Tail Tigerhai DomenehaiBrugde Hammerhai HåbrannBrugde Fins HammerhaiBrugde HvithaiHvalhaiJaws HvithaiHammer Domenehai Tigerhai Oksehai Hvalhai DomenehaiJaws Mako HåbrannHvithai HvalhaiTigerhaiMako BrugdeOksehai Apex Tigerhai HvalhaiDomenehaiHammerhai HvithaiBrugde Jaws HammerhaiOksehai BrugdeTigerhaiHai Brugde HvalhaiHvithaiDomenehai Oksehai Tigerhai Hammerhai TigerhaiHvithaiOksehai BrugdeHammerhaiHvalhaiDomenehaiHai HvithaiHvalhai BrugdeOksehaiDomenehai HammerhaiTigerhai HvithaiSjøen H TigerhaiOksehai DomenehaiHvalhaiBrugde Hammerhaien Hai Domenehai HvalhaiBrugdeHvithaiOksehai HvithaiHai DomenehaiHvalhai Hammerhai Oksehaien HvithaiHai Havet Oksehai TigerhaiHvalhaiDomenehaiApex OksehaiHai Sjøen HvalhaiBrugde HvithaiHammerhaiBrugde BrugdeJaws Finne Brugde HvithaiHvalhaiTigerhai Haiene HammerhaiDomenehaiBrugdeOksehaiTigerhai HvalhaiHvithaiHai HvalhaiBrugdeTigerhaiHammerhaiHvithai Oksehai Domenehaien Domenehaier OksehaiHvithai HammerhaiHvalhai Hai OksehaiTigerhai HvalhaiDomenehaiHammerhai BrugdeHvithai DomenehaiBrugde TigerhaiHammerhai HvithaiHvalhaiOksehai BrugdeHvithaien HammerhaiDomenehaiHvalhai Tigerhai Mako TigerhaiHvalhai BrugdeDomenehaiHvithai OksehaiHammerhai BrugdeOksehaien HammerhaiHvalhaiDomenehai HvithaiBrugde HvithaiJaws HaiHvalhaiHammerhai BrugdeDomenehai DomenehaiTigerhaiHammerhaiHvithaiBrugdeOksehai HvalhaiHai TigerhaiHvithaiBrugdeHvalhaiOksehai Domenehai Hammerhaien BrugdeTigerhaiHammerhai DomenehaiHvalhai OksehaiHvithai DomenehaiOksehaiSjøen HammerhaiHvalhaiTigerhaiHåbrann Hammerhai HvithaiBrugde OksehaiHvalhai BrugdeTigerhaien HvithaiBrugdeOksehaiHai TigerhaiDomenehai HvalhaiBrugde HvalhaiOksehaiDomenehai TigerhaiBrugde HammerhaiHvithai Tigerhai HvithaiOksehai HammerhaiDomenehaiBrugdeHvalhai HvithaiOksehaiBrugde Hammerhai Tigerhai Domenehai
HvalhaiHammerhaiHvithaiBrugdeTigerhai OksehaiDomenehaiHai DomenehaiHvithaiBrugdeTigerhaiHammerhai Hvalhai Oksehaien OksehaiBrugdeBrugde Hvithai DomenehaiTigerhaiApex HvalhaiHvithaiJaws Haier DomenehaiHammerhaiTigerhaien TigerhaiBrugdeOksehaiSjøen Domenehai HvalhaiHammerhaien DomenehaiHammerhaiHavet BrugdeOksehai HvithaiTigerhai HvalhaiHammerhaiHvithaiHai Brugde Oksehaien Domenehaien DomenehaiBrugdeHai Havet Hammerhai Hvalhai Tigerhaien HvalhaiOksehaiBrugde BrugdeDomenehaiTigerhaiBrugde Hvithai DomenehaiBrugdeHammerhaiTigerhaiHvalhai Oksehaien HvalhaiBrugdeHvithaiDomenehaiOksehai Tigerhai Hammerhaien BrugdeHvithaiBrugde HammerhaiTigerhaiHvalhaiSjøen Domenehai Tigerhai Sjøen HvithaiHammerhai Brugde Jaws OksehaiBrugdeHvalhaiHvitha Domenehai Tigerhai Hammerhai TigerhaiHammerhaiHavet HvalhaiHvithai BrugdeDomenehai TigerhaiDomenehaiHammerhai Hvalhai HvithaiOksehaiBrugde HvithaiDomenehaien Finne HammerhaiHvalhaiBrugdeBrugde BrugdeOksehaiBrugden Hammerhai Hvithai Domenehaien HvithaiBrugde TigerhaiOksehaiDomenehaiHammerhai Hvalhaien DomenehaiHammerhaiOksehaiHvithaiHvalhai Brugde Tigerhaien HammerhaiTigerhai DomenehaiHvithaiBrugdeHvalhai HvalhaiBrugdeHvit Haier OksehaiTigerhai Domenehai Hai OksehaiBrugdeHvithaiJaws HavetDomenehai TigerhaiHvalhai DomenehaiHvithaiHåbrann Hammerhai Brugde OksehaiHvalhai DomenehaiHvalhaiBrugde OksehaiBrugdeHammerhai Hvithaien DomenehaiTigerhaiApex Hammerhai Hvithai Hvalhai Oksehai OksehaiHammerhaiJaws DomenehaiTigerhaiBrugdeHvalhaiJaws Hvalhai HvithaiTigerhaiDomenehaiBrugdeHammerhai Oksehaien Håbrann TigerhaiOksehaiHvithaiBrugdeHvalhaiHammerhaiHavet
There were two hidden messages: the first was the URL http://twitterhai.tech/min_hemmelige_mappe/, and the second was the word HAI1337 (shark 1337) hiding in all the tweets.
The mentioned URL http://twitterhai.tech/min_hemmelige_mappe/ was the URL to a directory containing a single file called haimat (shark food), and it was indeed shark food. Again the file command proved a quick and simple way of determining the file type:
```shell
file haimat
haimat: pcap-ng capture file - version 1.0
```
A pcap-ng capture file is a file with a packet capture format that contains a "dump" of data packets captured over a network - a file typically seen from the good old packet analyzer Wireshark.
Now, I'm no expert in Wireshark, but I have used it now and then to listen in on network traffic. I used it extensively while building the app for my HDL Buspro smart home - to see how the different components were talking together. Luckily this dump only contained 400+ packets.
I might very well have missed stuff in the dump, but I did find two things. The first was a web browser visiting the root of a server and getting back the following HTML:
```html
<html>
  <head>
    <title>Sharky Secret Distributor</title>
  </head>
  <body>
    Her er dataene dine. Passordet har du allerede...
    <a href="https://blog.roysolberg.com/secret_data.zip">secret_data.zip</a>
  </body>
</html>
```
The page said Here's your data. You already have the password... and linked to a ZIP file.
The second thing of interest was the response to the request for the actual ZIP file. Wireshark lets you easily save files that have been transferred over the wire.
The solution was to extract the ZIP file from the packet capture file.
When trying to unzip the ZIP file, it asks for the password for the file insignificant_shark.png.
As hinted, you were already supposed to have the password. Yes, the password was HAI1337.
Of course the image was of a shark. The shark had what looked like runes on its side. Looking at a table of a runic alphabet, the three runes can be transliterated to l s b.
What could the letters LSB mean in the context of an image? Least significant bit. That matches the file name insignificant_shark.png. Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video.
There are many online tools for looking for secret messages in images.
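The idea behind LSB steganography can be shown with a toy sketch. This operates on a raw byte array; a real tool would first decode the PNG into pixel bytes, and the embedding/reading order here (one bit per byte, most significant bit of each character first) is just one common convention.

```javascript
// Toy LSB steganography on a raw byte array (not a PNG decoder).
// Hide one message bit in the least significant bit of each byte.
function embedLsbMessage(pixelBytes, message) {
  const out = Uint8Array.from(pixelBytes);
  [...message].forEach((ch, ci) => {
    for (let b = 0; b < 8; b++) {
      const bit = (ch.charCodeAt(0) >> (7 - b)) & 1; // MSB first
      const idx = ci * 8 + b;
      out[idx] = (out[idx] & 0xfe) | bit;            // overwrite the LSB
    }
  });
  return out;
}

// Read the LSBs back and reassemble 8 bits per character.
function extractLsbMessage(pixelBytes, messageLength) {
  let bits = '';
  for (let i = 0; i < messageLength * 8; i++) {
    bits += (pixelBytes[i] & 1).toString();
  }
  let message = '';
  for (let i = 0; i < bits.length; i += 8) {
    message += String.fromCharCode(parseInt(bits.slice(i, i + 8), 2));
  }
  return message;
}

const pixels = new Uint8Array(64).fill(200); // dummy "pixels"
const stego = embedLsbMessage(pixels, 'HAI');
console.log(extractLsbMessage(stego, 3)); // "HAI"
```

Because only the lowest bit of each byte changes, the stego image is visually indistinguishable from the original.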
If you decoded the image incorrectly, it would reveal a QR code encoding the helpful text You thought we would hide anything of SIGNIFICANCE? Not the LEAST.... Decoding it correctly revealed the next URL: http://twitterhai.tech/u_are_th3_winrar.jpg.
The image at http://twitterhai.tech/u_are_th3_winrar.jpg has the text "Do we have a winner?".
The puzzle can't end with a question. So what should we be looking for here? The URL says "winrar" and not "winner". The simple image is a stunning 9 MB. It is possible to hide files at the end of images, and WinRAR is a file archiver utility for ZIP and RAR files.
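Why does appending an archive to an image work? JPEG parsers stop at the end-of-image marker, while ZIP tools locate their structures by their own signatures, so one file can be valid as both. A sketch (the format signature bytes are real; the payload contents are invented):

```javascript
// Sketch: a "JPEG bytes + ZIP bytes" polyglot. Image viewers stop at the
// JPEG end-of-image marker (FF D9); archive tools find the ZIP by its
// "PK\x03\x04" local file header signature.
const jpegBytes = Buffer.from([0xff, 0xd8, /* ...image data... */ 0xff, 0xd9]);
const zipBytes = Buffer.concat([
  Buffer.from([0x50, 0x4b, 0x03, 0x04]),   // ZIP local file header signature
  Buffer.from('...archive data...'),       // invented payload
]);

const polyglot = Buffer.concat([jpegBytes, zipBytes]);

// An extractor can find the embedded archive by scanning for the signature:
const zipOffset = polyglot.indexOf(Buffer.from('PK\x03\x04', 'binary'));
console.log(zipOffset); // the ZIP starts right after the JPEG bytes
```

This is also why the innocent-looking image could be a suspicious 9 MB: the extra megabytes live after the image data, where no viewer ever looks.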
The solution was to use e.g. WinRAR to extract the files gratulerer.txt and gratulerer.gif from the "image".
I wouldn't be very surprised if there was another level of puzzles and solutions hidden here somewhere, but who knows. The file gratulerer.txt (congratulations.txt) contained a greeting telling that all the puzzles had been solved, and included a (shark related, of course) code word that could be used as proof of completing the challenge. gratulerer.gif is a GIF from the movie The Great Gatsby.
So does all this prove that you are ready to work as a digital forensics specialist for a national security agency? Of course not in itself. But I suppose it's a good starting baseline for applicants. Solving all the pieces shows a basic understanding of a wide range of topics: knowing a little bit of code, some basic cryptography, a bit of networking, and having some general smarts. Of course, if you applied on January 8th or later you might expect some extra questions regarding the solution, as someone solved the challenge on Reddit and there's also a write-up on GitHub. I wanted to wait to publish this until after the application deadline.
Giving a challenge like this also helps spread the word about the job opening, especially when it ends up on so many newspaper front pages. I hope PST got some good candidates. :)
At least here in Norway we have long traditions with advent calendars - both as gift calendars for kids and TV shows with one new episode from December 1st until Christmas Eve. In the last couple of decades this tradition has also extended online with businesses having gift calendars where you can typically give away some personal information with a chance to win some prize.
I usually end up answering a few calendars hoping to win something cool, but this year I signed up on quite a few calendars to see if I could find anything interesting security-wise. Most of the stuff presented in this post is no big deal. Companies just want as many people as possible to sign up and don't really care whether it's possible to get an advantage. Still I think it's interesting to see how the calendars are built. And then there are the big leaks of personal information.
Circle K Norge runs quite a few gas stations in Norway. They had a "pre-Christmas game" in the second half of November where you could win a "coffee deal" worth 34 USD (299 NOK). With that deal you get a cup that you can refill with any hot liquids on any of their stations throughout 2019.
The game was easy enough; you had to catch as many as possible of the falling Christmas themed items. You had three chances to play every day and there was a top score list where the top 10 single game scores would win the prize.
So, how could one cheat in this game? Well, that seemed pretty easy. The whole game was run in the user's browser and when the game was over the browser posted back the score to the server.
Technically it wasn't all bad news. I mean, the JavaScript was minified, you had to be identified with your phone number, and the requests containing the score were signed by the mentioned minified JavaScript. Of course, the concept of the client reporting the score kind of breaks all other efforts. The easiest way to cheat here would be to play one round with the web developer tools open, watch the request, and then search for related parameters in the source code. Then one could set a breakpoint before the request was signed, play another round, and - while the game was paused - change what was obviously the points before letting the script run on.
But is it a big deal? Circle K just wants as many as possible to play and as many as possible to know about their coffee deal. Any cups sold/given away will most probably end up in sales that'll give more profit than cost. Just me writing this post gives them some more free advertisement. But I think it's pretty unfair to the hundreds of people who really tried to play their best day after day for a chance to win this cup. They are the ones that are cheated.
December came and the pre-Christmas game closed. The next competition in line for Circle K was a pretty cool "name that tune" type of game where you got more points the quicker you were able to identify and pick the song playing.
The concepts surrounding this game were pretty much the same as the previous one. The game was run in the contestant's browser and then the score was reported back to the server. The way to cheat would be the same as described in the previous game.
There's some mismatch between the invitations to the games, the terms, and the in-game text with regard to the actual prizes, but from my understanding the awards were like this: the daily top 10 scores of the game would be awarded the coffee deals (so 240 thermo cups were given away in total), and the next 10 best daily scores would get a gift card for a coffee and a bun. There's also some talk about a main prize, and my understanding is that that's usually 113 USD (1000 NOK) worth of gas. It's unclear to me if there's a draw and/or if it's related to the total points across all days.
Is it acceptable to cheat in competitions like this? Terms and conditions most often don't allow for any kind of fraud, but on the other hand they don't take any precautions to stop cheating and let anyone just tell them their score.
There was this web site with another smaller advent calendar with a daily challenge. I happened to surf by their front page early in December when they just had published an article telling about the new leaderboard that they had made. The leaderboard was loaded in an iframe. And going to the root of that website revealed the usernames and e-mails of all 1000+ contestants.
I quickly wrote an e-mail to them and they responded within minutes and took down the whole webapp in question. They told me that the article telling about the leaderboard was published too early by a mistake. Probably the leak didn't last for many minutes. Because of the short duration of the leak and the small amount of e-mail addresses I don't feel very comfortable naming them. (But it's probably a good idea to not expose webapps with personal information to the Internet even during development.)
There were some advent calendars that used a third party system that I have reported several security issues to. We're talking about millions of names, e-mail addresses, phone numbers, and in some cases addresses, names and birthdays of kids, purchase history, national identity numbers and passwords. I will wait until they have fixed everything before doing a write-up or two about them.
The power company Fjordkraft and their subsidiary TrøndelagKraft had their own advent calendar with a daily prize of 568 USD (5000 NOK) in cash.
What they also had was a good old AngularJS app with a "flaw" often seen on the web: the JavaScript revealed the paths to other parts of the application. And what it revealed was the path to the admin interface used for getting statistics and drawing a winner. There was even a frightening function called Reset database. The admin UI was so out in the open that I have a small hope it could be a honeypot, but I doubt it.
Luckily there was some sort of code needed to use any of the functionality. And that's really why I didn't bother to report it to them. I just hope they didn't use a simple code word like "santa" or "xmas2018".
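This class of leak is easy to reproduce: front-end route definitions usually survive minification as plain strings, so simply grepping the bundle reveals "hidden" pages. A sketch with an invented bundle snippet (the route names below are made up, not Fjordkraft's actual paths):

```javascript
// Sketch: scraping route paths out of a minified front-end bundle.
// The bundle content below is invented for illustration.
const bundle = `
  e.state("home",{url:"/"}),
  e.state("calendar",{url:"/calendar/:day"}),
  e.state("admin",{url:"/admin/statistics"}),
  e.state("adminDraw",{url:"/admin/draw-winner"})
`;

// Pull out every url:"..." string and filter for interesting ones.
const paths = [...bundle.matchAll(/url:"([^"]+)"/g)].map(m => m[1]);
console.log(paths.filter(p => p.startsWith('/admin')));
```

The lesson: anything routed in client-side JavaScript is public knowledge, so admin functionality needs real server-side authentication, not obscurity.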
For me this is different from the Circle K competitions where you were guaranteed to get a physical prize if you just give a high enough score back. In these other cases you are at max given an advantage when there's a draw in the daily or final lottery.
The florist Mester Grønn had one of the calendars giving a little advantage to a web developer. The JSON clearly stated which answer was the correct one, and in many cases that would be a quicker way of finding the answer than googling or looking at their web site. Of course, it's no big deal.
The supermarket chain Kiwi had an interesting twist technology-wise. Also in this case the JSON clearly stated which answer was the correct one. But what's more, it gave the opportunity to just return true as the answer for a given day, so you didn't really have to look up the correct answer at all. And - while I haven't got it confirmed - it looked like you could just fill in the answers for all weeks at once.
The news site for coders kode24.no had an entertaining calendar. Every day it gave a new small puzzle - typically involving some use of the browser's developer tools. While this didn't directly give a big advantage, it let you stay ahead of the game by letting you solve the puzzles for the following days by directly requesting the contents of the puzzle "file" (which was practical if you were short on time some days). A simple Curl command made it all very easy to use:
```shell
for i in {20..24}; do
  echo '\n'$i:
  curl 'https://kode24-jul2018.herokuapp.com/api/files' \
    -H 'Content-Type: application/json' \
    -H 'Cookie: id=[valid user id hash]' \
    --data-binary '{"path":"\'$i'-DES","fileName":"HINT.TXT"}'
  sleep 1
done

20: {"content":["To legender, fra Porsgrunn den ene.","Slår etternavnene sine sammen,","og skaper en kode av glede."],"type":"txt","name":"hint.txt","size":256}
21: {"content":["Når jeg dobler dette tallet,","og plotter det inn der artiklene deres bor,","finner jeg en spillmaskin,","som er dagens kodeord:","35262282,5"],"type":"txt","name":"hint.txt","size":8}
22: {"type":"error","content":"Fant ikke fila di."}
23: {"content":["mine to favorittfolk,","fra min favorittserie.","slå dem sammen,","så løser du kodens mysterie."],"type":"txt","name":"hint.txt","size":256}
24: {"content":["Siste innspurt, du har vært flittig som bien.","Denne her blir ekstra vrien.","Reisens start, er per e-brev.","Send meg et pling, få tilbake et stev."],"type":"txt","name":"hint.txt","size":256}
```
And even though you couldn't actually answer the puzzle for a future day, it was possible to verify that you had the right solution:
```shell
curl 'https://kode24-jul2018.herokuapp.com/api/code' \
  -H 'Content-Type: application/json' \
  -H 'Cookie: id=[valid user id hash]' \
  --data-binary '{"path":"24-DES","code":"julekos"}'

{"type":"txt","content":
["** Passord korrekt: Server er allerede autorisert. **",
"Trekningen for denne dagen er over. Gå til dagens konkurransemappe, 22. desember",
"** OBS! Du får kun poeng for å svare på dagens konkurranse."]
}
```
It's interesting to see the different information the different calendars ask for. What is really needed to draw a winner and/or send any desired information to the end user? E-mail or phone number should be sufficient I suppose. Maybe a (first) name is ok?
The different types of information the calendars asked for were these:
Why on Earth would you ask for the gender and address to be part of a draw? Even worse - user experience wise - was that you in some calendars had to fill in all the information every single day.
A few calendars even forced you to sign up for a newsletter to be able to participate in the competition. I must say I liked Vipps' Messenger calendar, where they used Microsoft Forms to collect names and e-mail addresses and clearly stated in one line that the information would only be used to contact the winners and that all information would be deleted when the contest was done. It doesn't have to be harder than that.
I made some more observations that I wanted to add at the end here.
The alarm company Sector Alarm had a very interesting feature which can't be described as anything but a dark pattern. From Wikipedia: 'A dark pattern is "a user interface that has been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills."'
Of all the calendars that I tried out this year, Sector Alarm was the only one who had a checkbox with inverted logic. One of the checkboxes you had to take a stand on every day said "I don't want to receive a security alarm offer". While this seems formulated to trick people into signing up for something they don't want, they made it worse by suddenly one day rephrasing the checkbox to "I want to receive a security alarm offer". So if you were used to ticking the box, you suddenly had to re-read it and take a new stand on whether to check it or not. I wonder what they do if you in the middle of December say you want to be contacted, but the rest of the month say no.
Intersport's reminder e-mail for the calendar linked to a URL that redirected to a Facebook post, which was just a link and a post saying that the calendar for that day had opened. Then you had to click that and have a new browser tab opened and go to a Facebook app, which again was an iframe to a Fanbooster application, which was the actual calendar. Were they trying to make the user experience worse on purpose?
Intersport's not-so-good user experience went from a little annoying to bad when they a) forced you to fill out a lot of fields every day (while most calendars would use a cookie to remember the little info they wanted you to fill in), and b) every single day made you search for and select your favourite store from a 100+ element drop-down.
(Gotta love their name btw. Using an apostrophe (') in their name means that most people will spell/type their name incorrectly. And guess what, they even do it themselves: in their e-mails they use an acute accent (´).)
Skoringen had a special twist on their calendar. They had a scratch calendar where you got X tickets for the lottery, and then they typically had a simple game you could try. When you finally managed to win the game, you got a message saying "Unfortunately, one cannot win every time". I don't know what it was, but you had this feeling for 2 seconds of being satisfied and happy for winning the game, and then they just finished you off by saying you didn't win.
I had this case with Mester Grønn's calendar where it suddenly said "Your session expired. For your own security you need to refresh the browser window." Really? For a calendar? If you can just refresh the browser to continue, I'm sure you could've solved this in a better way.
The developer in me was fascinated by Fjordkraft's API endpoint /api/calendar/isbeforedecember which returned the HTTP status code 200 OK before December and then 410 Gone in December.
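The observed behavior of that endpoint can be expressed as a tiny handler. The real implementation is unknown; this just mirrors the status codes seen:

```javascript
// Mirrors the observed behavior of /api/calendar/isbeforedecember:
// 200 OK before December, 410 Gone from December 1st.
function isBeforeDecemberStatus(date) {
  return date.getMonth() < 11 ? 200 : 410; // getMonth() is 0-indexed; 11 = December
}

console.log(isBeforeDecemberStatus(new Date(2018, 10, 30))); // November -> 200
console.log(isBeforeDecemberStatus(new Date(2018, 11, 24))); // December -> 410
```

Using 410 Gone (rather than a generic 404) is a nice touch: it tells the client the resource existed but is permanently unavailable.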
These were my observations of testing out quite a few online advent calendars. It was pretty much as expected; mostly ok, and then a few ways to cheat your way to prizes and some small and big leaks of personal information. Stay tuned for details on the biggest leak of them all.
PostNord put all customers (name, phone and address) into a search database that was publicly available (known API key and referer). The database was used for easy selection of name and address when returning items. It was possible to opt out (mark the address as secret). The customers had to register on the PostNord site, but were not told that the address would be publicly available. Previously PostNord had a dark UX pattern that tricked users into registering when they really wanted to track their package.
Who: | PostNord. Also reported to NetOnNet. |
Severity level: | Low |
Reported: | September 2018 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | No reward. Thanks for the feedback. |
Issue: | The "return package to web shop" page had an inline search field where you could search for name, address and phone number of registered PostNord customers. It was easy to download the whole database. |
A friend of mine - Thomas Kalve - sent me a tip about an interesting fuzzy search he came across when trying to return a package to the online shop NetOnNet. When entering a phone number it searched for a match and also showed close matches, even though he had never knowingly signed up for any search service.
The "return to web shop" page was backed by a search database containing 885,086 customer records. It used the search service Algolia which is a "search as a service" (SaaS) that is mainly branded towards product search. PostNord had added all customers to this search engine and made them searchable. It was possible opt out of this use of personal data.
PostNord told me they had 1.1 million users. I found 885,086 customers in the database. So around 200,000 had opted out.
The main issue with this database was that it was open and that PostNord did not declare this use and exposure of personal data. According to GDPR one should declare all uses of personal data. One should also make sure that the customer understands what he or she is accepting.
The form where you could say that an address is secret ("hemmelig adresse"), meaning opt out of the search database:
The easiest way to use this leakage was to search for people in the form. One would need a return code (e.g. "netonnet") and knowledge of where to find this page. The search could then be done in Chrome or another browser. If you had a partial name or partial phone number, the search engine would help with the rest.
If you wanted a copy of the database, you could download the data over time. According to PostNord the service was protected by the following Algolia security features: HTTP referer restriction, a rate limit and a limit on the number of records retrieved. The first is, of course, super easy to spoof. The settings they used for the IP rate limit and the number of records retrieved per result are not known to me, so how long a full download would take is also unknown. I did not test any limits on this system. Using my browser and curl I would guess I saw around 100-200 customer records.
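Even with a page size limit, a full copy is just a matter of paging. A back-of-the-envelope sketch in Python - the page size of 1,000 is an assumption for illustration, not a number confirmed by PostNord or Algolia:

```python
import math

NB_HITS = 885_086        # customer records reported by the index
HITS_PER_PAGE = 1_000    # assumed maximum page size (illustrative)

# Number of paginated queries needed for a complete copy
pages_needed = math.ceil(NB_HITS / HITS_PER_PAGE)
print(pages_needed)  # 886
```

Depending on the actual rate limit, 886 requests could mean anything from minutes to days of work.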
After doing a "copy to curl" in Chrome, I got a working and reproducible Curl command that I could run from a terminal. Stripping away unneeded headers I saw that the Referer
header was required. I did not strip the query or change the JSON data.
curl 'https://swstrzr7ig-3.algolianet.com/1/indexes/address_books/query?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%203.28.0&x-algolia-application-id=SWSTRZR7IG&x-algolia-api-key=35b1d443b661ed9e65aa4e6c439030f1' \
  -H 'Referer: https://my.postnord.no/return/show' \
  --data '{"params":"query=45442095&hitsPerPage=5"}'
The request returned the following data (personal information has been changed):
{ "hits":[ { "id":775007, "name":"John Johnsen", "mobile":"45445095", "street_name":"John street 3", "additional_street_name":null, "city_name":"John town", "postal_zone":"4000", "country":"NO", "external_id":"808007", "source":"contacts", "user_id":1378007, "created_at":"2018-07-20 10:03:39", "updated_at":"2018-07-20 10:03:39", "objectID":"775007", "_highlightResult":{ "name":{ "value":"John Johnsen", "matchLevel":"none", "matchedWords":[ ] }, "mobile":{ "value":"45445095", "matchLevel":"full", "fullyHighlighted":true, "matchedWords":[ "45442095" ] }, "street_name":{ "value":"John street 3", "matchLevel":"none", "matchedWords":[ ] }, // (.... More highlights without match ....) } }, {...}, {...}, {...}, {...} ], "nbHits":270, "page":0, "nbPages":54, "hitsPerPage":5, "processingTimeMS":8, "exhaustiveNbHits":true, "query":"45442095", "params":"query=45442095&hitsPerPage=5" }
The documentation for the Algolia API is available online. Using the API I could see other indices, and I tried a query without parameters. The query returned 30 results. The first results were created 2018-04-13 14:35:54. The nbHits was 885086, meaning there were 885,086 customer records in the database. Querying for indices I got the same number of records but a different creation date. I'm guessing this system was set up between November 2017 and April 2018.
{ "items":[ { "name":"address_books", "createdAt":"2017-11-16T08:23:33.157Z", "updatedAt":"2018-09-19T19:23:17.094Z", "entries":885093, "dataSize":225565146, "fileSize":691313817, "lastBuildTimeS":87, "numberOfPendingTasks":1, "pendingTask":true }, {"name":"biggest_decline", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)}, {"name":"biggest_incline", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)}, {"name":"popular", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)}, {"name":"price_amount_asc", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:25:15.172Z", (...)}, {"name":"price_amount_desc", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:25:15.172Z", (...)}, {"name":"products", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)}, {"name":"rating_desc", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)}, {"name":"review_score_desc", "createdAt":"2017-12-15T10:32:32.897Z", "updatedAt":"2018-09-19T19:23:35.868Z", (...)} ], "nbPages":1 }
I do not have screenshot proof of this, but a previous version of the "track package" page (which was leaking personal data back in May 2018) applied a dark UX pattern where the user was shown a registration form before the tracking information. A link to this page was sent in an SMS to the recipient of a package. There was a small "skip registration" link or button (e.g. "No thanks, I just want to see where my package is").
A user would typically do the following:
When I last saw this, I did the following:
This was not a security issue of the kind Roy and I normally find and write about. The issue here was mainly that the customers had not accepted this use. PostNord did not have consent to make the customer records available for download. Customers did accept some terms and conditions, and PostNord did have a privacy policy, but neither said anything about putting the records online.
The use of the search API was intentional and part of their solution. It did not have any security issues in the normal sense, but this usage with personal data constitutes a leak of personal information.
As last time, the issue was quickly resolved.
I reported the issue to NetOnNet and PostNord. A copy was also sent to the Norwegian Data Protection Authority (DPA, Datatilsynet). It was reported to NetOnNet since the return code used to access the page was theirs. The report e-mail was titled "Whole NetOnNet customer database available", as that was the first assumption and a title that would get attention. It later turned out that it was not their customer database, but a separate PostNord database.
I talked to PostNord on the phone and got an e-mail summarizing the handling during the following day. According to PostNord, the service was temporarily taken down at 08:50. Taking the service down was the right response. My friend confirmed that the service was offline at 10:13.
PostNord will give the Data Protection Authority their version of the leak as part of the mandatory deviation notification.
I got in contact with a parent concerned about the security of a new messaging app used for communication between pupils, teachers and parents (Norwegian link) in the schools of Oslo. The app - which is called Skolemelding - was released this summer and is developed by CGI Norge. Unfortunately the concerns turned out to be justified. Please note that I was not the one to discover any of these issues.
The Norwegian newspaper Aftenposten broke the news about the vulnerabilities September 6th (follow-up article) and the system was temporarily shut down.
The security vulnerabilities were really bad. The system is designed for sending messages between school and home, including messages related to student absence. According to Norwegian law health information is regarded as sensitive personal information, and therefore I would assume that the system should be designed with the appropriate security level to prevent unauthorized access.
Anyone with a valid login, or anyone who got hold of a guardian's Social Security number, could access any and all messages across all 63,000 pupils plus guardians and teachers. Not only communication between guardians and the school, but also communication between teachers, was accessible through these vulnerabilities.
Anyone who got hold of a guardian's Social Security number could access details about a pupil's family (full names, Social Security Numbers, usernames, e-mail addresses, phone numbers) (in addition to the guardian's messages). I have published a couple of cases where one easily could get a specific person's Social Security number.
There are two apps - one for parents and one for teachers + students. Looking at the Android apps, they are pretty much the same app with just different build variants (flavors). Parents use a different login than the teachers and students.
The apps are built using React Native with all the functionality bundled in a JavaScript file. Seemingly you get the same functionality by logging in on the web as in the apps.
To fetch a message the app calls /api/message/GetMessageWithId?messageId=[ID1]&threadId=[ID2]&isReplyAllowed=[true|false]&onBehalfOf=[pupil's username].
This is where we find the first failure. The server did not do an authorization check on the messageId and therefore let one read any other message in the system. And to make this a huge problem, the IDs were sequential numbers that let one iterate through all messages available.
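The missing server-side check is the textbook fix for this kind of IDOR. A minimal sketch in Python of what the server should have done before returning a message (all names and data here are hypothetical):

```python
# Hypothetical in-memory model of the missing authorization check.
messages = {
    1001: {"thread": 55, "body": "Absence note"},
}
thread_members = {
    55: {"parent_a", "teacher_x"},
}

def get_message(message_id: int, user: str):
    """Return (message, status). The real system skipped the
    membership check entirely, so any sequential messageId worked."""
    msg = messages.get(message_id)
    if msg is None:
        return None, 404
    # The crucial check: is the requesting user part of the thread?
    if user not in thread_members.get(msg["thread"], set()):
        return None, 403
    return msg, 200

print(get_message(1001, "parent_a")[1])  # 200
print(get_message(1001, "stranger")[1])  # 403
```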
The parent/guardian version of the app uses a common log-in solution by the Agency for Public Management and eGovernment (Difi) called ID-porten. The teacher + student version of the app uses a log-in solution called Feide - a pretty common solution among schools. It's important to emphasize that there were no weaknesses in these two solutions in this case. However, the usage and implementation in the apps was horrible.
In the parent version of the app, when pressing the Log in button, the user is taken through the log-in flow of ID-porten as expected. However, there was a big flaw in the logic of the authentication server (called midporten) used by this system. The "token" returned from that server was just the user ID - the parent's Social Security number. One could intercept the call at this stage and replace the user ID/SSN with another one and get full access.
Reading the last sentence you might ask what the client really did with the user ID. Well, the client sent the user's SSN to the school portal (the app's API) to generate an access token. This means that it was possible to just skip the whole flow of midporten and ID-porten and just ask the app's backend to get an access token for any valid user ID. Wow.
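In effect, the backend accepted a client-supplied SSN as proof of identity. The broken shape of that token exchange can be sketched like this (function and variable names are hypothetical; the real API is not public):

```python
# Sketch of the flawed flow: the token endpoint trusts whatever
# user ID the client sends, so the ID-porten step adds nothing.
import secrets

tokens = {}

def issue_access_token(user_id: str) -> str:
    """Flawed by design: there is no proof that the caller actually
    authenticated as user_id, so any valid SSN yields a token."""
    token = secrets.token_hex(16)
    tokens[token] = user_id
    return token

# An attacker can skip ID-porten entirely and just ask directly:
attacker_token = issue_access_token("01020312345")  # someone else's SSN
print(tokens[attacker_token])  # 01020312345
```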
So the flow was like this (with optional faking of user ID in either step 5 or 6):
While not a direct flaw, it is interesting to look at the structure of the data from the server. There's a call to /api/settings which returns information about the logged-in user and the associated student(s). The JSON returned is actually LDAP data, and it contains full names, usernames, e-mails, Social Security numbers and possibly phone numbers of guardians and the associated student(s). It looks like a dump of the LDAP directory. Why would one need this information in the app? And why are Social Security numbers stored like this in a directory an app has access to? We're talking about hundreds - maybe thousands - of lines of LDAP data.
The fix for the login vulnerability was - instead of sending the user ID to generate the access token - to send a JSON Web Token (JWT). The JWT does not contain any user information, meaning that it has to be checked on the server side to see which user is asking for an access token.
What is surprising is that the JWT does not contain some internal user ID (like a UUID), but rather a timestamp. And the JWT cannot actually be verified on the backend (other than string comparison) as it is sent back as lower-case.
What happens if two users log in at the exact same time? Is it possible that the timestamps (and therefore JWT) can end up being the same? Would one user get access to the other?
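If a token is derived only from a timestamp, two logins in the same instant yield identical tokens. A toy illustration of the collision risk - this is not the vendor's actual code, just a demonstration of why a timestamp is not a usable identifier:

```python
import hashlib

def timestamp_token(timestamp_ms: int) -> str:
    """Toy token derived only from a timestamp, mirroring the
    described fix. No per-user entropy means collisions are possible."""
    return hashlib.sha256(str(timestamp_ms).encode()).hexdigest()

# Two users logging in within the same millisecond get the same token:
alice = timestamp_token(1536220800123)
bob = timestamp_token(1536220800123)
print(alice == bob)  # True - one user could end up acting as the other
```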
With these vulnerabilities you can start to wonder if there are other problems here. I have not tested any of these things, but it is something for the vendor to look into.
It's easy to make mistakes when programming, and not all programmers will have a good enough understanding of security (though I think we should try to improve this!). What really sticks out here is how both CGI Norge and the City of Oslo could release a product like this without testing the security. I do not believe this is a case where a pen tester overlooked something. This must be a case where there has been no external testing of the system. And that's a shame.
An app that was supposed to contain only addresses and a garbage collection calendar was actually using services that were leaking personal information like names and Social Security numbers for many, many persons.
Who: | Norconsult Informasjonssystemer (Nois) |
Severity level: | Low to medium |
Reported: | April 2018 |
Reception and handling: | Response time good, but slow on fixing. Notification to clients was faulty. |
Status: | Fixed |
Reward: | A thank you |
Issue: | Information leak with personal data and usage data for up to 625,000 people. Data also contained waste disposal routes, sewage monitoring, and more. Likely that modification of some data was possible. |
A friend came up with the idea of an Alexa skill/Google Assistant app where one could ask "when will the paper garbage be picked up?". He saw that the website of BIR - our local waste management company - did not provide an easy way to fetch that data. He said there was also an app with the same data. Taking a look at the communication between the app and the server, it quickly became clear that something was very wrong in terms of security.
Using the HTTP proxy application Charles it was easy to look at the traffic between the app and server. Surprisingly the app and server actually used SOAP for communication. SOAP is pretty painful to work with and not commonly observed in the world of apps (though it was used for the case with Tryg and Infotorg).
All the SOAP calls used the same simple username and password and all data was transferred unencrypted over HTTP.
Surprisingly, doing a search for a street name actually returned a list of properties including the full name of the owner.
Then, when selecting a property in the app there was a call getting more details about it. Even though the app did not show the data, the server response returned full name, address and Social Security number of the property owner.
SOAP and related concepts |
SOAP is an XML-based messaging protocol. It can be used over HTTP, as in this case, or over other protocols like JMS and SMTP. A SOAP server contains a lot of SOAP services. These are described in a WSDL file. With the WSDL (and corresponding XSDs) one can generate strongly typed classes for all endpoints that are easy to use. The WSDL/XSD files can contain comments that describe the parameters and possible values. This was not the case here. Using SoapUI or a similar tool, one can easily try different parameters for each service. SoapUI is like Swagger or Postman for REST. |
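For readers who have never touched SOAP: a request is just an XML envelope POSTed over HTTP. A minimal sketch using Python's standard library - the operation and parameter names below are assumptions for illustration, since the real WSDLs carried no documentation:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(operation: str, params: dict) -> bytes:
    """Build a bare-bones SOAP 1.1 envelope for a single operation."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, xml_declaration=True, encoding="utf-8")

# Hypothetical call shape; the real parameter names were undocumented.
print(build_envelope("GetKommune", {"brukernavn": "nois", "passord": "nois"}).decode())
```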
Using the username and password found in the app, we could do queries against the SOAP service. As we did not know the input values, some services were hard to use.
Nois has published quite a few apps, and one of the calls the app made was to get the configuration of these apps. Looking at the root of this URL revealed a public-facing status page with URLs to many of Nois' web services. The page was even indexed by Google. The URLs gave out WSDL files for all services available.
Based on the copyright statements on the page and source code it looked like the system overview page was last maintained in 2012.
After dusting off SoapUI, we explored some of the "GetXYZ" services. We got successful responses from a subset of the ones we tried. The main reason for failed requests was the lack of examples of the input parameters. We did not try any write operations.
- GetRecycleStations (BIR AS)
- GetKommune (GetMunicipalities) - Envina
- GetWaterGaugePoints - Sandnes kommune
- SearchPropertiesWithPrincipalsAndZipCodes - BIR AS

Based on the list of SOAP services available, we figured the following data was available within the systems (given that they use that part of the system):
We created a list of servers we found on the system overview page. We also managed to find some using search engines (googling for phrases from the application). From the overview page we identified some of the owners, and it was also clear that some servers were offline.
To exclude those that were not affected by this huge security flaw, we verified each and every server. Mainly two SOAP services were called: one that returned all municipalities on that server and one with the latest changes in customer data. The service with customer data revealed that our user (nois/nois) had access to customer data on that particular server AND that the server was currently being used (somebody had edited some customer within the past week or month).
The servers were verified manually using SoapUI and we took screenshots as proof. The screenshots were edited to censor any personal data (we don't want to have that saved).
A few servers on the overview page did not work even though they showed status "green". One example of this is Gjesdal kommune. According to the overview page the service was running, but we could not reach it. This could be because they had firewalls blocking our HTTP requests.
The owners were either a municipality or a cooperation of municipalities (Norwegian: IKS - interkommunalt selskap).
Municipality / company | Municipalities | Inhabitants | Screenshot |
---|---|---|---|
Avfall Sør AS | Kristiansand, Songdalen, Søgne, Vennesla | 120,403 Customer estimate: 45-65,000 | |
Remiks | Tromsø, Karlsøy | 76,814 Customer estimate: 30-40,000 | |
Regiondata | Dovre, Lesja, Sel, Vågå | 14,820 Customer estimate: 5-8,000 | |
Haugaland Interkommunale Miljøverk (HIM) | Haugesund, Bokn, Tysvær, Vindafjord, Etne | 62,026 Customers: 33,052 | |
Shmil | Hemnes | 4,524 Customer estimate: 2,000 | |
Retura Val-Hall AS / Hallingdal renovasjon / Valdres Kommunale renovasjon | Hol, Ål, Gol, Hemsedal, Nesbyen, Flå, Krødsherad, Nord-Aurdal, Sør-Aurdal, Øystre Slidre, Vestre Slidre, Etnedal, Vang | 40,915 Customer estimate: 15-22,000 | |
BIR AS | Askøy, Bergen, Fusa, Kvam, Os, Osterøy, Samnanger, Sund, Vaksdal | 359,364 Customer estimate: 140-190,000 | |
Innherred Renovasjon | Selbu, Malvik, Meråker, Stjørdal, Frosta, Levanger, Verdal, Inderøy, Leksvik | 92,563 Customers: 35,671 | |
Hamos Forvaltning IKS | Hemne, Agdenes, Meldal, Orkdal, Snillfjord, Skaun, Rindal, Hitra, Frøya, Rennebu, Surnadal | 50,967 Customer estimate: 20-27,000 | |
Stavanger kommune (own server) | Stavanger | 132,729 Customer estimate: 50-70,000 | |
Sandnes kommune (own server) | Sandnes | 76,328 Customer estimate: 30-40,000 | |
Nordfjord Miljøverk IKS (NoMil) | Bremanger, Vågsøy, Selje, Eid, Hornindal, Gloppen, Stryn | 32,932 Customer estimate: 12-18,000 | |
Envina IKS | Klæbu, Melhus, Midtre Gauldal | 28,582 Customer estimate: 10-15,000 | |
Movar (Mosseregionen Vann, Avløp og Renovasjon) | Moss, Rygge, Råde, Vestby, Våler | 76,391 Customer estimate: 30-40,000 | |
It was hard to find good numbers for how many customers are in the databases of the different municipalities. Innherred Renovasjon and Haugaland Interkommunale Miljøverk did have customer numbers (Innherred) / waste disposal customer numbers (HIM): respectively 2.6 and 1.87 inhabitants per customer. Based on those ratios we estimate that 450,000 to 625,000 private persons were possibly exposed with full name, address, Social Security number, contact information, etc.
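The per-company estimates above follow directly from the two known inhabitants-per-customer ratios. Sketched as a calculation:

```python
def customer_range(inhabitants: int) -> tuple[int, int]:
    """Estimate the customer count from inhabitants using the two
    known ratios: 2.6 (Innherred) and 1.87 (HIM) inhabitants/customer."""
    low = round(inhabitants / 2.6)
    high = round(inhabitants / 1.87)
    return low, high

# Example: BIR AS with 359,364 inhabitants
low, high = customer_range(359_364)
print(low, high)  # roughly 138,000-192,000, matching the 140-190,000 estimate
```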
Some ZIP codes from a GetEiendom request to the server Regiondata:
This is not a complete list. We manually picked out some from the response in SoapUI to get a feeling for the data set.
Municipalities listed, but with a different service exposed:
We did not have a username/password that worked for this service. It doesn't seem wise to have these on the Internet. They should all add a firewall blocking access.
Municipalities listed, where our requests did not work:
We believe we shouldn't have been able to reach the servers of these municipalities. It doesn't seem wise to have these on the Internet. They probably should all add a firewall blocking access.
Municipalities listed where firewall blocked our request:
We consider all these secure.
Sunday night we sent an e-mail to the director of Nois and the head of the department responsible for the software in question.
A few hours later we got a pretty cold "We confirm that the e-mail has been received." back.
Just before midnight we sent an e-mail with some questions and information about another 7 municipalities affected by the issue.
Not having heard back we sent a new e-mail asking for a status.
We soon got a response telling us that they had established a working group for the issue. They had fixed a couple of the issues, and they had informed the Norwegian Data Protection Authority (Datatilsynet) and the municipalities in question.
We used the freedom of information law to get a hold of the alerts and communication with the municipalities. The alert was a letter sent on day 5.
In addition to general information, this is what the letter said:
"It has been discovered that it is technically possible for 3rd party to extract information from this service if you have the deep technical insight required. It is important to emphasize that this has not happened, but that we have implemented preventive measures."
They also said a new version would be rolled out during the next days.
We saw that the issue still was open for many of the municipalities and asked for an ETA of a fix.
Tuesday morning Nois said they expected everything to be OK by the end of the week.
Friday night we asked if it really was the case that everything was fixed.
We got a reply that they would work through the weekend to get it fixed.
We informed them that not everything was fixed yet. We sent an example query that returned a list of 1,208 customers that had changed during the last couple of months. The details returned from the server included full name, Social Security number, property identifier and lots of other fields.
Nois told us they were working on getting access to the necessary servers to fix it.
In the afternoon we got another response telling us that everything should be OK and they thanked us for informing them about it.
We replied that we were still getting personal data from one of their services.
They looked into the issue and they thought it could have been a service being restarted by an automatic scheduled job...
We got another e-mail telling us that they had even more services automatically restarting during the night.
We noticed that a service was back up and running, and it returned information about more than 6,000 persons.
The day after, a Sunday, we got a response that they would contact the customer the day after to make sure it was fixed.
We got another response that the service in question now was removed.
We reported the incident to the Norwegian Data Protection Authority (DPA). After Nois gave their version of it, the DPA closed the case and said they were satisfied with the countermeasures implemented by Nois. We believe the report sent from Nois to the DPA to be inaccurate.
Nois claimed that it was not possible to get lists (supposedly one would have to ask for one at a time) of Social Security numbers or of properties that had not made their payments. Several services returned lists of SSNs/customers/properties. (Property addresses are public knowledge anyway, and one could easily just ask for each property, one by one.)
Nois claimed that the issue was caused by a "technical failure". Not using encryption on any of their services, using "nois" as both username and password, and returning Social Security numbers when asking for which days the garbage is picked up is not a technical failure. The cause is ignorance by developers and lack of knowledge by any managers above them. Quite a few people must have been aware of this.
Nois claimed that the "deviation" was open from January 31st 2018 until April 16th 2018. The issue was not closed until the later part of June 2018 and start of August 2018 - and not at all when the report to the DPA was sent. While we discovered this mid April, Nois published BIR's "garbage collection calendar" app in August 2017. Looking at that exact version of the app reveals that the same services without encryption, with the same login and the fields for Social Security numbers, names and addresses were being used. Looking at Innherred Renovasjon's app we found the same to be true in a version released in October 2015. We don't understand why they would specify January 31st 2018 as the start date. If they will want to claim this, they should provide some evidence for the date being January 31st.
Nois claimed that, looking at the logs, no personal information was leaked. This claim is a bit hard to understand, as Nois did not have operational responsibility for many of the servers and had a hard time getting access to them to do updates. At the time of the report they had not closed the issue on all servers. Also, as mentioned, the issue was seemingly open for much longer than they claim. And finally, the services were indeed leaking personal information for every request being made. It's impossible to know if a request came from the "garbage collection calendar" apps or from someone faking it and looking up a person's Social Security number. And how long do they keep logs for?
During the period from when we notified the vendor to the time it was fixed, we monitored the servers from time to time using a small script. The monitoring was done by calling one of the SOAP services. We chose GetMunicipalities (Norwegian: GetKommune) as it did not contain personal information but still required authentication and returned some data.
The smilies and colors below indicate if the service responded or not. There are two versions of this data set, one with just the days we tested and one with empty cells for days we did not test.
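The monitoring logic itself is trivial and can be sketched without any network code by injecting the function that performs the actual SOAP call (everything here is a simplified, hypothetical version of our script):

```python
def check_server(fetch) -> str:
    """Classify a server as 'up' or 'down' based on a single
    GetMunicipalities-style probe. `fetch` performs the actual HTTP
    call and is injected, so the logic stays network-free and testable."""
    try:
        status, body = fetch()
    except OSError:
        return "down"
    # We only marked servers that answered 200 with a body as 'up'.
    return "up" if status == 200 and body else "down"

# Example with stubbed fetchers instead of real SOAP calls:
print(check_server(lambda: (200, "<Kommune>...</Kommune>")))  # up
print(check_server(lambda: (404, "")))                        # down
```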
We sent this security issue to the local media since there were 14 different corporations/municipalities involved. The main takeaway is that information security is now higher on the agenda for a lot of them. We also got to know that the corporations (customers of Nois) did not view this as a security issue/breach of personal data. Norwegian headlines are translated below.
digi.no wrote a summary on a national level: "Renovation companies in many Norwegian municipalities leak personal data"
No media reported on these:
What is clear is that the municipalities are responsible for the protection of the personal data they hold. They are responsible for their database. It seems like Nois has full access to the systems for upgrades and checking logs. We still wonder who is responsible for the running of the systems. Who is responsible for not changing the default configuration (a default username and password with full access)? Who is responsible for not configuring the firewalls? Who is responsible for building apps on top of SOAP and using a user with full access for the communication?
After the media asked the municipalities responsible for the personal data and after we got to see the letter from Nois notifying their customer about the security issue, most of them did not react to this as a system leaking information. Why did Nois send such a vague letter to the customers and DPA (Datatilsynet)?
Nois claimed they checked the logs and that no data was actually leaked. They also claimed the security issue was present for just 4 months. How were the access logs checked? Did they check back in time to at least 2015? Where is the report to their customers about this?
Who checks the security of an app that doesn't contain a login or any personal data at all? No one, because it just doesn't make sense to do that. It was just a coincidence that we discovered this one.
What sticks out in the end for us is that the reception and handling of the issue wasn't very good. Usually IT companies respond more quickly and are more open about the issue and the handling of it. Here the customers got a vague description not telling much about what was going on.
It would be interesting to know if anyone really dug into how long the data was available, because it seems like it had been there for quite a few years.
Finally, if you as a developer are told to use a service that returns much more data than what's intended to be used, you should speak up. Quite a few people must have known that this personal data was being transferred unencrypted over the wire.
Just hours after I warned Thomas Cook Airlines about a massive leak of flight data, the Norwegian newspaper VG reported on a person who accidentally got logged in as a booking agent on a travel agency's web site. What surprised me was that the travel agency wasn't notified first and that the newspaper published the article without giving anyone a chance to fix the issue.
The travel agency with the affected system was the Norwegian Amisol (ex Pyramidene Reiser). They were using a booking system called TravelBook from a Swedish company called adb utveckling.
The security issue here was anyone on the Internet could log in as the booking agent for the travel agency:
The person discovering the security vulnerability was not an IT person, and I can kind of see how he decided to notify the newspaper after making such a random discovery through Google. However, I cannot understand how the journalist or editor could publish the article the same day as they got the tip. The newspaper interviewed the CEO of Amisol, but did not give them or the system vendor any time to actually look at the issue, let alone fix it.
Why is this bad? Well, I read the news article shortly after it was published and I just googled for the term "amisol booking" and found the link with the agent login. I could log in and see all the details about all the travels from at least 2013 to future ones. It would take minutes to make a script that could download all the personal data I listed and anyone could go in and do their best to ruin a future vacation.
In Norway the press has to follow the Ethical Code of Practice for the Press. I don't know if this is a breach of publication rule 4.3, which says "Always respect a person's character and identity, privacy, ethnicity, nationality and belief." But nonetheless, VG did indeed put thousands of people's privacy at risk by giving anyone a description of how to find all this personal data. I don't think that was the right thing to do.
Skip this part
I wanted to add the technical details of this issue and what should have been in place to avoid it.
In 2018 you cannot create a system like this that does not use https. If your old legacy system is still using http you need to upgrade. And remember to check two things: that no https page links to http, and that the server does a redirect to https instead of returning any content over http. If you are in doubt whether your site needs https, or why it's a good idea in general, please check out Troy Hunt's great article and video on the subject. Also, from Chrome 68 Google now labels http sites as "not secure".
It's also a good idea to use HTTP Strict Transport Security (HSTS) to protect against protocol downgrade attacks.
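The two checks above can be expressed as simple rules over a server's response. Here is a minimal sketch (my own policy, not any official tool): given the status code and headers returned for a plain-http request, decide whether the site redirects to https and whether HSTS is enabled. The functions are pure so they can be tested without network access.

```python
# Sketch: evaluate whether a plain-http response is configured safely.
# The header names are the real HTTP ones; the pass/fail policy is my own.

def http_response_is_safe(status: int, headers: dict) -> bool:
    """True only if a response to an http:// request redirects to https."""
    lowered = {k.lower(): v for k, v in headers.items()}
    if status in (301, 302, 307, 308):
        return lowered.get("location", "").startswith("https://")
    return False  # any content served directly over plain http is a failure


def has_hsts(headers: dict) -> bool:
    """True if the response enables HTTP Strict Transport Security."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return "strict-transport-security" in lowered


# A permanent redirect to https, with HSTS on the https response, passes:
redirect_ok = http_response_is_safe(301, {"Location": "https://example.com/"})
hsts_ok = has_hsts({"Strict-Transport-Security": "max-age=31536000"})
```

In a real audit you would feed these functions the actual responses from your server, for every hostname and path variant you serve.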
This is what gave away the Amisol security vulnerabilities. Because the username and password were sent in a GET request (and with no redirects) instead of a POST - and Google somehow got hold of the URL - it was indexed and available through search. Just in case, it's worth testing that even though your login form uses POST, the server does not also accept the credentials in a GET request.
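One cheap safeguard is to scan your access logs or outbound links for URLs that carry credential-looking query parameters, since those are exactly what ends up in browser history, proxies, and search indexes. A hypothetical sketch using only the standard library - the parameter-name list is my own guess, not taken from the affected system:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical: flag URLs carrying credential-looking query parameters,
# the pattern that got the agent login indexed by Google.
SENSITIVE_PARAMS = {"password", "passwd", "pwd", "username", "user", "token"}


def leaks_credentials(url: str) -> bool:
    query = parse_qs(urlsplit(url).query)
    return any(name.lower() in SENSITIVE_PARAMS for name in query)


# leaks_credentials("https://example.com/login?user=agent&password=secret")
# flags the URL; an ordinary search URL like "?q=hello" does not.
```

Running something like this over logs won't fix the form, but it will tell you quickly whether credentials are travelling in URLs at all.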
I suppose some think of XSS as harmless fun, but I think that's a harmful view. For this site it was - and still is - possible to endanger both administrators and customers. A successful attack would make it possible to steal usernames, passwords, session cookies, personal data, and also to alter and delete data.
While some might argue that hiding stuff is security by obscurity, it's good practice to not let your administrator pages show up in search. It's not what anyone (except for bad guys) wants to find when searching for something related to your service. Just remember not to specify direct URLs to your admin stuff in robots.txt, or else it just makes it easier to find ways into your system.
Looking at the server's response header, it responded with Server: Microsoft-IIS/6.0. If that's right, that's pretty crazy. That is software from 2003 - 15 years ago. The list of bugs in the CVE security vulnerability database is pretty long, and I would expect the accompanying operating system, frameworks and libraries to be of the same age.
The challenge with IT security is that the good guys need to find all vulnerabilities, while the bad guys only need to find one. Security is not easy, or everybody would get it right. Instead, we all get it wrong at some point to some degree. IT security is not a state, it's a process. Any sort of audit would surely discover the flaws in this case.
I've written about security.txt before. I would recommend everybody to include that file on their site. It can be so hard to find the right (or any) contact point at a web site. Often you have to really push to get customer support to deliver the right message to the right persons.
It was possible to - partially very systematically - retrieve passenger information for travelers with Thomas Cook Airlines. Data as old as from 2013 and all the way to 2019 was available.
Who: | Thomas Cook Airlines (and especially travelers booking via Ving) |
Severity level: | Medium to high |
Reported: | June 2018 |
Reception and handling: | Fair |
Status: | Fixed |
Reward: | A thank you from Thomas Cook Airlines, a "thank you, but there are no issues" from Ving |
Issue: | Information leak with personal and travel information from at least 2013 including future travels |
I had just paid for my vacation with Ving when I got an e-mail from Thomas Cook Airlines with a link to airshoppen.com where you can upgrade your flight (e.g. meals, seating, etc.), buy duty free and do the online check-in. I got a bit curious when I saw that the links from the e-mail did an auto login of my user based on only very little data.
Skip this part
The links from all the e-mails from Thomas Cook Airlines to their domain airshoppen.com were click registration URLs also containing a redirect URL. The redirect URL was of the format
hxxps://no.airshoppen.com/autologin?ReturnUrl=/oppgrader-flyreisen-din&bookingNo=<integer booking number>&tourOperatorTag=<short name tour operator>&depDate=<departure date>&<some UTM parameters>
That link did another redirect which left you logged in so you could see some of your personal info. Of course, the only natural thing to do was to play with the parameters: change the dates, change the booking number and remove one or more parameters.
The first thing I discovered was that I could log in to my own booking using ±1 day in the URL. I'm sure this is done to avoid problems with timezones when people log in and pick their travel date manually. However, this means that when guessing on a nearby booking number, one only needs about 120 guesses to cover a full year of travel dates. To me this is not very reassuring.
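The roughly-120-guesses figure follows directly from the ±1-day tolerance: each guessed departure date matches three calendar dates, so covering a year takes about 365/3 attempts. A few lines make the arithmetic explicit:

```python
import math


def guesses_to_cover(days_in_year: int = 365, tolerance_days: int = 1) -> int:
    """Number of date guesses needed when each guess matches a window of dates."""
    window = 2 * tolerance_days + 1  # dates matched per guess (here: 3)
    return math.ceil(days_in_year / window)


print(guesses_to_cover())  # 122 - about 120, as stated above
print(guesses_to_cover(tolerance_days=0))  # 365 without the tolerance
```

In other words, the timezone convenience cut the attacker's work by a factor of three.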
When you are logged in you are presented with a list of the names of everyone travelling on the same booking. You can then select who any orders will be registered on. Copying that call to curl would make it look something like this:
curl 'https://no.airshoppen.com/Account/SelectPassenger?locale=nb-NO' \
  -H 'Cookie: ASP.NET_SessionId=<session ID>; <lotsa cookies>; .AIRSHOPPENAUTH=<some auth token>' \
  -H 'Origin: https://no.airshoppen.com' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'Accept-Language: nb-NO,nb;q=0.9,no;q=0.8,nn;q=0.7,en-US;q=0.6,en;q=0.5' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.42' \
  -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
  -H 'Accept: */*' \
  -H 'Referer: https://no.airshoppen.com/autologin?ReturnUrl=/oppgrader-flyreisen-din&bookingNo=<integer booking number>&tourOperatorTag=<short name tour operator>&depDate=<departure date>&<some UTM parameters>' \
  -H 'X-Requested-With: XMLHttpRequest' \
  -H 'Connection: keep-alive' \
  --data 'TravelDocumentsRequired=False&Email=<e-mail address>&ShowCanContact=False&IsPersistant=false&SignInModel.SelectedTourOperatorTag=<short name tour operator>&SignInModel.DepartureDate.Day=<departure day>&SignInModel.DepartureDate.Month=<departure month>&SignInModel.DepartureDate.Year=<departure year>&SignInModel.SigninMethodType=&SignInModel.BookingNo=<booking number>&SignInModel.ReturnUrl=%2F&SignInModel.HashValue=' \
  --compressed
That call returned markup with personal and flight information. However, I quickly saw that this call could be shortened to this:
curl 'https://no.airshoppen.com/Account/SelectPassenger' \
  --data 'SignInModel.SelectedTourOperatorTag=<short name tour operator>&SignInModel.BookingNo=<booking number>'
Now, this was an issue. Using only the three-letter short name for Ving and a booking number, the server would return the same data about the booking.
As this was a POST call I would guess there are no logs that really can tell if this vulnerability has been misused by anyone.
For Ving this was pretty serious as they use a booking number that is seemingly an incremental integer, which makes it possible to iterate through all bookings. It's worth noting that not all Ving travels are using Thomas Cook Airlines, but quite a few of their 400,000 yearly travelers do.
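A back-of-the-envelope comparison shows why the sequential booking number is the real problem. The 400,000 yearly travelers figure is from the text above; the range size and the 128-bit alternative are my own illustrative assumptions:

```python
import math

# Illustrative only: compare the work needed to hit a valid sequential
# booking number versus a random 128-bit identifier.


def expected_tries_sequential(valid_ids: int, id_range: int) -> float:
    """With N valid IDs packed into a dense range, most guesses hit."""
    return id_range / valid_ids


def log2_tries_random(valid_ids: int, bits: int = 128) -> float:
    """Expected tries against random tokens, expressed as a power of two."""
    return bits - math.log2(valid_ids)


# Dense incremental IDs: nearly every guess lands on a real booking.
print(expected_tries_sequential(400_000, 500_000))  # 1.25 tries per hit
# Random 128-bit IDs: ~2^109 tries per hit - practically impossible.
print(round(log2_tries_random(400_000)))  # 109
```

The departure-date "secret" only multiplied the attacker's work by ~120, which is nothing compared to the ~2^109 factor a random identifier would have added.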
For Ving the oldest bookings I saw were from 2013, and the most recent one from 2019. I suppose this means that data was leaking about at least tens of thousands of travels.
I asked friends and family for booking numbers to test with, and even found some more on Google. It was possible - using only the booking number - to get the data for the travels from the travel companies Ving Norway, Ving Sweden, Spies Denmark and Apollo Norway.
Luckily Apollo doesn't have such easily guessable booking numbers, so I couldn't find any other bookings from the one number I had. I never got any data from the travel company TUI (ex Star Tour), but I didn't have any recent or future booking numbers. (TUI does however seem like the company most concerned and in control with regard to GDPR, so maybe they don't have any old data available in Thomas Cook Airlines' systems?)
Other than that it's only speculation, but airshoppen.com does handle many, many travel companies from Norway, Sweden, Denmark, Finland, the United Kingdom and Germany. I would expect at least some of them to be vulnerable through this leak. airshoppen.com also serves travelers travelling with Condor, Atlantic Airways and Small Planet Airlines, but please note that I never saw any actual data for these three airlines.
It was seemingly possible to iterate Ving's bookings with Thomas Cook Airlines from 2013 to 2019. At least data from Ving Norway, Ving Sweden, Spies Denmark and Apollo Norway was affected by this vulnerability.
I really struggled to find a proper contact point for airshoppen.com, but I submitted a web form with the closest possible topic telling them about the issue. I got an automatic reply by e-mail.
The morning after, not having heard back, I felt that this leak was too big to just let slide. I asked Ving on their chat for an e-mail address, where I wrote them and gave the issue ticket number from Thomas Cook Airlines.
Ving replied back in just a few hours. They thanked me, told me they would pass it on internally and contact Thomas Cook Airlines. They also told me that I would hear back from Thomas Cook and not Ving.
This was the first time Thomas Cook Airlines replied to my inquiry. And guess what? They just said that they needed my booking number to help me. What the ... I tried to clarify that I was reporting a security hole and that they probably wouldn't need my booking number for that.
Ving actually wrote me back and told me Thomas Cook Airlines had looked at the issue. They told me that I couldn't log in without having both the booking number and the departure date. What the ...
I replied that this really was an issue and that data back as far as to 2013 was available.
Ving replied and told me I needed the date and it wasn't as easy as I said. I asked them to give me a booking number so I could prove it. I never heard back from them after that.
Annoyed by their ignorance and unwillingness to fix the issue I contacted a journalist.
Like most days I checked if the issue was still there. It was suddenly fixed. Though I hadn't heard a word back from them.
I got an e-mail from a legal counsel at Thomas Cook Airlines verifying that they had identified and closed the vulnerability. It wasn't the most grateful e-mail I've ever received, but they did tell me that they'll keep working to improve security and that they take their customers' security very seriously.
I got a phone call from the same legal counsel at Thomas Cook Airlines. She wanted to thank me for informing them about the issue and apologized that it took so long before I heard back from them. They were going to follow GDPR with regard to handling the incident, but from my understanding they don't think it's worth reporting to the data protection authority.
Well, you can try calling an airline or someone working at an airport and ask them for passenger information. You won't get it. You are not supposed to know that person X is currently or probably will be onboard on flight Y.
Also, some people might not like that you can see who they travelled with on vacation maybe 5 years ago. ("Didn't you say you were going to that job conference in Stockholm? And who is this you were travelling with?")
Another problem with this is how this opens up for spear phishing - to use the information to target and deceive a traveler.
Who really knows for how long someone might have taken advantage of this leak?
As in the case with the leaked hotel reservations, I decided to publish this through media first. Yle covered the story a few days ago (Swedish link only, sorry). Yle is Finland's national public broadcasting company. I'm glad to see that big media companies like Yle care about online security and our personal data.
Another week, another leak. It's disappointing that so much of my personal information and usage data is available for anyone on the Internet. What I can find in just a couple of minutes on a website is surely just the tip of the iceberg.
While we will continue to see leaks like this, I hope that companies will get better at handling and dealing with information about security vulnerabilities. For me the whole process of reporting this issue wasn't very pleasant. It could be a coincidence, but shortly after I contacted media the issue was fixed and I finally received a proper response. Only after the phone call with Thomas Cook Airlines on day 19 was I more satisfied with how the whole incident was handled. The conversation made it clear to me that they really did take this seriously.
]]>A tracking system was leaking all of its users' pictures and personal information, along with information and location data from all the trackers.
Who: | Anonymous, let's call them Acme4 |
Severity level: | Medium |
Reported: | May 2018 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | A thank you |
Issue: | Information leak with pictures, personal information and location data |
Acme4 sells a tracking chip intended to be used for dogs. There's a companion mobile app which is used to see the tracker on the map and send commands to it. It was a bit of a coincidence that I noticed this system and took a closer look at it.
I never had physical access to a tracker, so the only entry point for me was the app itself.
I have my guide on how to crack Android apps, which I take a quick glance at when doing this stuff. This was easy enough though. I downloaded the APK from apkmonk.com and decompiled it with javadecompilers.com. The end result was full access to the source code and resources.
Normally I would probably just have used an HTTP proxy to intercept the traffic, but in this case, where I didn't have the necessary hardware (the tracker), I wouldn't be able to use all the functions, so I needed the source code to discover all possible HTTP calls. Also, the source code sometimes includes hidden gems like unused endpoints, test servers and more.
I was a bit surprised by how clean the code was and how it used modern patterns and libraries. The UI isn't that nice, and I often find there to be a correlation.
With the app's source code I could try out the server communication. This wasn't exactly your regular REST API. While the data returned from the server was JSON, all of the calls were GET calls for all kinds of actions, with the data in query string parameters - even the authentication. Of course this just makes it much easier to play around using a desktop browser.
The search for adding friends had the classic "return everything" behavior when searching for ___. There's nothing inherently wrong with that, but it isn't ideal, and the search also returned the ID, username, display name, first name, last name and e-mail address of the users.
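A server-side guard against catch-all searches is only a few lines. This is a hypothetical sketch of my own, not the vendor's code: strip wildcard/placeholder characters before checking the minimum length, so a query of only underscores (which returned every user here) no longer passes.

```python
# Hypothetical server-side validation for the friend search.
WILDCARDS = set("_%*?")


def is_valid_search_query(query: str, min_length: int = 3) -> bool:
    """Require a few meaningful characters after stripping wildcards."""
    meaningful = "".join(ch for ch in query if ch not in WILDCARDS).strip()
    return len(meaningful) >= min_length


# "___" is rejected, a real name fragment like "rex" is accepted.
```

Combined with returning only the fields the friend-search UI actually needs (no e-mail addresses), this would close both problems mentioned above.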
I found all the pictures of the users in an open Amazon S3 bucket. Luckily most of the users are dogs. But still, some owners might upload pictures of themselves with their pet. Surely the owners don't expect the pictures to be lying around on the Internet.
Using the PHP scripts it was possible to iterate through all of the registered trackers, as the ID was an incremental integer. The combined tracker information I got from the scripts was the ID, phone number (of the tracker's SIM card), IMEI, display name and historical location data (latitude, longitude, address, direction).
The good news is that I didn't find any direct way of seeing which tracker belonged to which user. However, about 5% could be connected because of the display name of the tracker. Additionally, because of the incremental IDs, it was possible to pretty accurately estimate which tracker belonged to which user.
Late Friday night I sent an e-mail to the support address. I like to keep it short, but this was probably the longest description I've had to write in such an e-mail.
Before lunch on Monday I got an e-mail thanking me for the report.
Again before lunch, I got an e-mail telling me that everything should be fixed. So this was all taken care of pretty quickly.
While doing this write-up I saw - and reported - that the search for adding friends within the app still returns e-mail addresses, and still returns all users if searching for a special character. I hope that'll be fixed. Imagine Facebook giving away all its users and their e-mail addresses that easily.
I looked into the financial and other public information of this company. I also checked out social media and did a little general due diligence. Though the person behind the company might not do all coding and support personally, it appeared to be a one-man show. I don't want to use my blog to afflict individuals. As stated before, I want people to know that none of their data is secure, I want us developers to improve our data security skills, and I want companies to take more responsibility around data security and their customer data.
Maybe we as consumers should think twice when buying devices connected to the Internet. Think about what information you hand over to the vendor and what could be the worst case if everything's leaked. Would someone be able to live track you? Would someone be able to know when you're not at home? Would you be OK with anyone having the usage data for this system? Would you be fooled if anyone used this information in a clever way in a phishing e-mail?
I wish there was some sort of certification to know that an IoT vendor at least fulfils some minimum standards with regard to computer security and undergoes regular third-party audits.
]]>The parking company OnePark had a security issue that made it possible to systematically iterate through and change the username and password for all of their customers. By logging in afterwards one could collect personal data and even register a car's licence plate to be paid by that account.
Who: | OnePark |
Severity level: | Medium |
Reported: | November 2017 |
Reception and handling: | Fair |
Status: | Fixed |
Reward: | A thank you |
Issue: | Information leak with personal information |
OnePark is one of Norway's biggest parking companies. They have many parking lots where they use automatic number-plate recognition (ANPR). You can just park your car, and then after picking it up you have 48 hours to go online to pay your bill. You don't even have to do any registration up front.
I got a tip that OnePark sends out the passwords for their accounts in clear text - and that is just never a good sign. So I decided to take a quick peek at their site.
Most stuff looked good, but I hesitated when I saw that the web form for updating the user profile actually sent the user's ID back. The user ID was that classic integer that we so often see and that - when there's a vulnerability - opens up for an enumeration attack.
I didn't want to destroy anyone else's data, so I created another new account and tested by passing that user ID in when doing a profile update with a third e-mail address and a new password. The site didn't complain, and voilà - I was able to log in to the other account with that e-mail address and password.
But of course, now the data was all replaced by mine. So I removed all form fields other than the username (e-mail address) and password from the profile update request. This worked just fine.
Inside the other person's profile it was now possible to get hold of all the personal data, including any licence plate numbers and see if there was payment information added to the profile. The payment information was securely stored at a third-party site.
Since the user ID was an integer, one could easily have set up a script to steal all the data. And not only that - one could of course also commit vandalism by updating or removing the data. What's more - if one's a bit bold - one could register the licence plate of a car and remove it after the parking was paid for. I did not check if this would be easily spotted and trackable for either the customer or OnePark.
I found a contact form where I described the issue with the account takeover.
I got an automatic reply that they had received the e-mail.
I got a phone call (even though I didn't give them my phone number) from one of the managers at the company apparently responsible for OnePark's web solution.
He thanked me a lot and was clearly proud that they had fixed the issue in just three hours. Three hours? Well, my message took 9.5 days to reach the people who could actually fix the issue.
While the handling of this company was good, OnePark seems to have quite a way to go on how to receive and handle issues around their online security.
Just after the phone call I also received an e-mail from the same chief - again thanking me and telling how they dealt with the issue.
Just like the security issue with the power company Norgesnett, this is a case where the authorization check for updating the profile fails. And because an integer was used as the user ID, it was possible to systematically exploit the issue.
It is quite common for software developers to trust the authentication, but then forget the authorization check and user input sanitization.
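The missing check is small. Here is a sketch of my own (all names hypothetical, not OnePark's code): the ID of the profile being updated must come from the server-side session, never from a client-supplied form field - and if the form does carry one, it must match the session.

```python
# Sketch of the authorization check that prevents this account takeover.
class Forbidden(Exception):
    pass


def update_profile(session_user_id: int, form: dict, db: dict) -> None:
    # Take the target ID from the authenticated session, never the client.
    target_id = session_user_id
    # If the form carries a user ID anyway, it must match the session.
    claimed = form.get("user_id")
    if claimed is not None and int(claimed) != session_user_id:
        raise Forbidden("profile update for another user")
    db[target_id].update(email=form["email"])


users = {1: {"email": "old@example.com"}, 2: {"email": "victim@example.com"}}
update_profile(1, {"user_id": "1", "email": "new@example.com"}, users)
# Passing user_id "2" while logged in as user 1 now raises Forbidden
# instead of silently overwriting the victim's credentials.
```

With this in place, the integer user ID is still enumerable, but enumeration no longer buys an attacker anything.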
And yet again we can see our personal data open for anyone to steal…
]]>robots.txt is unfortunately often a source for finding links to parts of websites that should not be publicly known (or even be on the Internet in the first place). I've written a few lines of JavaScript to make it quicker to visit all the links in this file.
After that you can just click the bookmark when visiting a website's robots.txt (like mine) to get it linkified and even open all its links with just one click of a button.
A bookmarklet is a bookmark stored in a web browser that contains JavaScript commands adding new features to the browser. Bookmarklets can be useful tools, e.g. for increasing the readability of web pages, doing searches, creating short URLs, etc.
Here are the few lines of source code, without minification and without URL encoding.
/*
 * ----------------------------------------------------------------------------
 * "THE BEER-WARE LICENSE" (Revision 42):
 * http://github.com/roys wrote this file. As long as you retain this notice
 * you can do whatever you want with this stuff. If we meet some day, and you
 * think this stuff is worth it, you can buy me a beer in return. Roy Solberg
 * ----------------------------------------------------------------------------
 */
javascript: (function () {
    console.log('robots.txt linkifier v1.1; https://blog.roysolberg.com');
    if (location.pathname != '/robots.txt') {
        if (confirm('Do you want to navigate to /robots.txt? You need to run the bookmarklet again to linkify it.')) {
            location.href = '/robots.txt';
        }
        return;
    }
    function openLinks() {
        var links = document.links;
        if (links.length > 20) {
            if (!confirm('There are ' + links.length + ' links. Are you sure you want to open them all at once?')) {
                return;
            }
        }
        console.log('Some browsers will block opening links this way.');
        for (var i = 0; i < links.length; i++) {
            window.open(links[i].href, '_blank');
        }
    }
    var base = location.protocol + "//" + location.hostname + (location.port && ":" + location.port);
    var html = '<body style="font-size:120%;"><script>' + openLinks.toString() + '</script><button type="button" style="width:200px;height:40px;font-size:120%;" onclick="openLinks();">Open all links</button><div style="font-family: monospace;">';
    html += document.body.textContent.replace(/(Allow|Disallow): (\/\S*)/g, '$1: <a href="' + base + '$2" target="_blank">$2</a>').replace(/\n/g, '<br/>');
    var win = window.open();
    win.document.write(html);
    win.document.close();
})();
The software and source code is released under the beer-ware licence.
The source code is also available at GitHub.
Why don't you try out my bookmarklet game DOM II: JavaScript Hell?
]]>Possibly millions of customer records (name, address, e-mail and phone number) from PostNord were exposed through unused API fields in a parcel tracking page used in Norway. The API has been online at least since 2013. The security issue was discovered after a parcel delivery from Komplett.no (Komplett Group AS, Norway), and the issue was also reported and handled through Komplett.no.
Who: | PostNord. Confirmed, but maybe not limited to, PostNord Norway. Reported to Komplett.no. |
Severity level: | Medium |
Reported: | May 2018 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | Thanks and a gift card at Komplett.no for 500 NOK (60 USD). Issue was reported to and handled by Komplett.no. |
Issue: | Page showing tracking information about parcels was leaking name, full address, phone number and e-mail. Parcel tracking code was guessable. At least Norwegian parcels affected. |
I had just bought a new phone from Komplett.no. The phone was sent with PostNord to my local store. When Komplett.no sent the package, I was e-mailed a link to a page with tracking information (minside.postnord.no, "min side" = "my page"). When it arrived at the local store, I was sent an SMS from PostNord with the pickup code and the same link to the tracking page.
The tracking page contained more information in the backend call to get parcel and tracking information than what was displayed on the page.
PostNord delivers, among other postal and shipping services, parcels in the Nordics. Norway was affected. It is possible that the parcel services in Sweden, Denmark, Finland and Germany were also affected. Norwegian tracking numbers can be checked on other PostNord tracking pages (e.g. the international and Swedish pages), so I find it likely but unconfirmed.
According to the privacy policy of Komplett.no, client data is stored for up to 36 months in PostNord's databases.
PostNord AB had, according to 2017 numbers, 17.2 million parcels delivered in Norway, 97.7 million in Sweden, 47.2 million in Denmark, 8.3 million in Finland and 15.5 million in Germany - 154 million parcels in 2017 and 142 million in 2016, so at least 450 million parcels over 36 months. It's unknown to me how many of these have tracking numbers that can be viewed on the tracking page, but at least Norwegian tracking numbers can. A good guess for Norway is around 50 million parcels within 36 months. The number of affected customers should be in the millions.
Komplett.no is part of the Komplett Group and is the largest e-commerce player in the Nordic countries, headquartered in Sandefjord, Norway. Reported revenue was MNOK 8,100 in 2017, with 1,600,000 active customers with one or more orders during the last year (numbers according to Canica, the owner). They pick and send packages 24/7 with an average of 3 per second in 2017 - that should be around 95,000,000 (95 million) packages per year. Komplett.no sends packages with Posten Norge (a state-owned company, owned by Norway) and the Norwegian parcel service of PostNord AB (a state-owned company, owned by Sweden and Denmark).
A small curiosity: no more information than Komplett.no has declared in their privacy policy (Norwegian text) was exposed. I am confident that the privacy policy is telling the truth in the chapter about sharing information with PostNord. Thumbs up!
The page had a URL like https://minside.postnord.no/public-services/tracking/7070205547XXXXXXX (where the X's were all digits in the tracking code). The tracking codes are GSIN tracking numbers. They have a prefix and a checksum as the last digit, and they are not far from an auto increment. See the update below for details. The tracking number is displayed on labels that are printed and put on the packages. They are also sent to the client by the shipping company or the e-commerce company.
The GUI on the tracking page contained information like city of origin, city of destination, opening hours of the pick-up point, weight and tracking events. It does not show the name or the full address of origin or destination.
Inspecting this page in Chrome Developer Tools I found that the REST response contained more information than the GUI. In addition to more detailed information about the package, it contained the name, full address, phone number and e-mail of the recipient. It also contained the name and address of origin.
I looked at 3 IDs around the tracking ID I got from my Komplett.no package. They were all Komplett.no packages containing 6 identifiable names (the e-mail/phone contained one name and the name on the package was a different one). On my phone I had an SMS for a package from the company Forbruksimport.no AS back in 2016. The link was still active, and my full name, address, company e-mail and phone number were present in the REST service. Changing tracking codes around this number I found another package between two other parties (a school and a printing company). This confirmed both that old tracking codes were still active and that other PostNord customers (not Komplett.no) were affected. I did not confirm any foreign tracking numbers (I don't have any).
All checking of tracking numbers was done manually in the GUI in Chrome. Unless PostNord had mass download protection, I think scripting a download of the whole database would be trivial.
26th of May - Update regarding tracking number (Thanks, Jonas!):
The tracking numbers are GS1 numbers and are detected as Global Shipment Identification Numbers [Norway-specific info, in Norwegian] by GS1's check digit calculator. They have a "company" prefix and a shipper reference starting at variable positions and with variable length [ref executive summary]. The last digit is a check sum.
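The GS1 check digit (shared by GSIN, GTIN and SSCC) is a simple mod-10 scheme: weight the data digits 3,1,3,1,... from the right, sum them, and pick the digit that rounds the sum up to a multiple of 10. A sketch, verified against the well-known EAN-13 example number:

```python
def gs1_check_digit(data_digits: str) -> int:
    """Compute the GS1 mod-10 check digit for a string of data digits."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(data_digits)))
    return (10 - total % 10) % 10


# Well-known EAN-13 example: data digits 400638133393 -> check digit 1
print(gs1_check_digit("400638133393"))  # 1
```

Since the last digit is fully determined by the others, only one in ten well-formed numbers even validates - but that checksum exists to catch typos, not to stop enumeration.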
These are some of my GSIN tracking numbers:
The 707 prefix is Norway according to the List of GS1 country codes on Wikipedia (700-709 = Norway). Both my PostNord tracking codes and the one from Posten Norge have the same 707 prefix. Other tracking codes I have in e-mails from Posten Norge seem to have the 707 prefix. I had one with a 704 prefix, and Roy found one with a 705 prefix.
I have not been able to identify what prefixes PostNord uses, whether large e-commerce companies like Komplett have their own prefix, or what prefix Posten Norge has. I still believe the tracking numbers are largely auto increments.
27th of May - Update regarding time frame:
After a bit of searching on Google for keywords in the JSON output of the service, I was able to find two paste bins from 2013 and 2014. Both outputs had tracking information plus name and full address. They did not have e-mail and phone number. Both were packages from Komplett.no.
The URL was present in the one from 2013. It is still active on tollpost.no and I could check my own tracking numbers there:
The tollpost.no domain redirects to postnord.no, but not for this service. Tollpost Global AS was acquired by PostNord AB some years back. Testing the same thing on postnord.no, the same API service returned
I think it's fair to say that the service has been online for over 5 years.
- Personal data secured by predictable tracking number (ID)
The combination is a leak of an estimated millions of customers' names, full addresses, phone numbers and e-mail addresses.
The solution with guessable tracking numbers has both advantages and disadvantages. A number with a checksum is easier to write down than something with the length and complexity of a UUID. The guessable/predictable part of the number (auto increment + checksum) makes it insecure: everybody can find valid numbers.
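One common fix can be sketched with the Python standard library alone: keep the human-friendly sequential tracking number internally, but expose personal data only behind a separately generated random identifier. Everything below is illustrative; the placeholder tracking number just mirrors the masked one shown earlier.

```python
import secrets
import uuid

# Internal, sequential, human-friendly; never an access key on its own.
tracking_number = "7070205547XXXXXXX"  # placeholder, as in the URL above

# Public, random, unguessable. uuid4 carries 122 random bits;
# token_urlsafe(12) is shorter but still 96 random bits.
public_id = uuid.uuid4()
short_token = secrets.token_urlsafe(12)

# The server keeps the mapping; only the random ID appears in links.
lookup = {str(public_id): tracking_number}
```

The customer can still read the short number off the parcel label, while the tracking page that shows names and addresses is reachable only via the random identifier.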
Given that they don't switch to something more secure, they can't give out personal data based on this tracking number. PostNord seems to be aware of this, as they have texts like "Due to security reasons we cannot show the recipient's full name and address. This is the postal code and city to where we will deliver the parcel." on another tracking solution they provide.
The handling was fast, so I'll give the numbers in hours instead of days. Komplett did exactly what they should when notified: they gave a preliminary reply within a short time frame and responded in more detail the morning after. The issue was fixed faster than I expected. The reception was also good, as they seemed happy to get the notification.
Notification was sent to PostNord (Data Protection Officer), Komplett.no (CEO, Data Protection Officer and a contact e-mail regarding personal data) with a copy to The Norwegian Data Protection Authority. The notification contained details about me plus 3 identified Komplett.no customers as examples. The 3 customer profiles identified 6 persons (a name identified one, an e-mail identified another). I also included my 2016 parcel as an example, which identified a school sending a package to a printing company. I felt that this was enough to get the attention of somebody at top management level and get it fixed in a rush.
I did not think I had found good addresses to contact in PostNord, but I had a better feeling about the addresses in Komplett. Smaller company, usually responsive to customers. I also asked the Komplett chat for an address for notifying about security issues. They had none. I did manage to find the name of the security chief and get his e-mail address confirmed, but the e-mail bounced.
Neither PostNord nor Komplett had a security.txt on the domains I looked at.
Komplett thanked me for the notification and confirmed that the message was received. This is a good sign that the message was sharp enough that people read it in the evening and sent it to the right people. It is really important to answer quickly, like Komplett did, if you get a message about security issues. Often the problem is that nobody answers at all.
The next morning, I got a friendly call from Komplett. They thanked me again and confirmed that they were in contact with PostNord and that they were on the case.
The GDPR launched in Europe on this day (25th of May 2018). It was not active in Norway until 1st of July, but it applied to all EU citizens in our databases. A lot of privacy policies had already been updated in Norway. Happy GDPR Day!
Another phone call from Komplett, again thanking me for the notification. I was told that the issue was fixed, and I quickly verified it after the conversation.
Just a few minutes later, the Data Protection Officer at Komplett confirmed that the mandatory incident report to The Norwegian Data Protection Authority had been sent. In Norway any unauthorized disclosure of personal data (e.g. a data leak or a client report sent to the wrong address) must be reported. The reports will be public (a few details might be withheld).
This is quite a large number of affected customers with their name, e-mail, phone number and full address neatly displayed along with their last parcel delivery. The leak is bad.
PostNord knows about the issue of displaying names and other information based on the tracking number.
The company Ariane had a leak in one of their newsletter software installations, exposing something like 1.5 million hotel reservations with hotel name, reservation number, dates of stay, customer name, customer e-mail address and possibly room number. A number of hotels were affected, and the data went about two years back in time and also included future stays.
Who: | Ariane (and therefore some of their customers) |
Severity level: | Medium |
Reported: | April 2018 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | A gift card for 1 night for 2 persons at a Thon hotel (provided by Thon which was the hotel chain I reported the leak to) |
Issue: | Information leak with personal information related to hotel reservations |
I had a talk at OWASP Norway's meetup in Oslo and therefore stayed the night at Thon Hotel Rosenkrantz Oslo.
I made the reservation online directly with the hotel as it was cheaper than via hotels.com. And because of the direct booking, the hotel started sending me different e-mails regarding my stay. Some of those e-mails led me to an unprotected website.
All the e-mails from the hotel had links named "View in browser". The e-mails directly regarding my stay linked to a Microsoft Azure application at some "random" cloudapp.net subdomain.
What hit me first was that the links were served over http and not https. And instead of a nice short URL pointing to Thon, it was a long one with a path signaling that the site was used for more than my hotel chain. The query parameters contained a big integer as an ID and my e-mail address. So the natural thing to try was to remove the e-mail address from the query parameters. To my surprise I still got my details back.
Then I tried my ID - 1 and got another person's booking. I never download a lot of data as I don't want anyone to question my motives, but I do like to get an idea of the scope of a data leak, so I did a few tests to see how many bookings this was about. My ID was past 2.37 million and the lowest I saw working was around 865 thousand, so I estimate that more than 1.5 million records were available.
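The scope estimate boils down to finding the lower end of the ID range, which can be done with a handful of requests using a binary search. A sketch of the arithmetic, with a mocked probe function standing in for the actual HTTP request (the numbers and the probe are only for illustration - this is not what I actually ran):

```python
def lowest_valid_id(probe, known_valid_id):
    """Binary search for the lowest ID that `probe` accepts, assuming
    valid IDs form one contiguous, auto-incremented range."""
    lo, hi = 1, known_valid_id
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):
            hi = mid      # mid is valid; the first valid ID is at or below it
        else:
            lo = mid + 1  # mid is below the valid range
    return lo

# Mock probe standing in for "does this booking ID return data?":
first_real_id = 865_000  # hypothetical
probe = lambda booking_id: booking_id >= first_real_id

lowest = lowest_valid_id(probe, 2_370_000)
print(lowest, 2_370_000 - lowest)  # prints: 865000 1505000
```

With a known valid ID around 2.37 million this needs only about 21 probes, which is why a few spot checks are enough to estimate roughly 1.5 million exposed records.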
It was possible to traverse the URL path and get to a generator/preview function of a lot of different types of e-mail templates (for check-in details, receipts, room number reminder, etc.) for a long list of hotels.
By changing the templates it was possible to retrieve different information about a booking. E.g. one template would include the room number, while another would include dates and the customer's name and e-mail address. Judging from the e-mails I received it would in some cases be possible to check some people in or out.
Just by doing a Google search for the subdomain I found a page that looked like a login page for the whole system. That page was also served over http.
The bookings seemed to range back to 2016 and also included future stays.
Also everything was served over an unencrypted connection so someone could potentially listen in and get the information.
The list of hotels affected by this security vulnerability in Ariane's system is longer, but as I only did a few tests, I only observed these:
Ariane has stated that most affected hotels are in Germany and France (Norwegian link). In the same article they are quoted saying that they cannot be sure that this issue has not already been taken advantage of.
I couldn't immediately see who was responsible for the whole system so in the afternoon I sent an e-mail to Thon Hotels' customer service. I got an automatic response giving me a hint that they would not read that e-mail until the day after, so I also sent them a direct message on Twitter saying that they probably wanted to check out the issue right away.
Just two hours later I got a reply thanking me and saying that the information was passed on to the web development department.
I tested the URL in question and saw that they had fixed the issue where one could access anyone's booking without also knowing the e-mail address.
I got an e-mail from the chief of security in the group owning Thon Hotels, where he thanked me and asked for my details to send a reward - a gift card, which I received just a few days later.
For once I did things a bit differently and worked with the media before publishing the case here myself. NRKbeta covered the story less than a week ago (Norwegian link only, sorry). NRK is the Norwegian government-owned public broadcasting company and the largest media organisation in Norway. They also featured it as the top story on nrk.no's front page for some time. I'm happy to see that big media companies like NRK care about online security and our personal data.
Is this leak so bad? Most people can handle having their name, e-mail address and reservation stolen or left in the open on the Internet. Still, this is a pretty bad leak. The number of reservations was big. Maybe someone was already taking advantage of it? It would be possible to regularly check the bookings of public figures or other individuals. It could also serve as circumstantial evidence of a person being at a certain place at a certain time.
Also, with information like this it would be pretty easy to do some kind of spear phishing - to use the information to target and deceive a hotel customer.
I think we all expect our hotel to keep our personal details safe and secured.
The electric power company Norgesnett had a security vulnerability that made it possible to get access to thousands of customers' personal info plus their usage data. This was probably also the case for quite a few of the hundreds of customers of the company Enoro - the creators of the vulnerable software.
Who: | Norgesnett and Enoro |
Severity level: | Medium |
Reported: | February 2018 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | A thank you |
Issue: | Information leak with personal information, power usage data, audit reports, meter number |
All electricity consumers in Norway will receive smart meters by January 1st 2019. There has been a bit of controversy around the meters. The most extreme skeptics are afraid of the radiation from the new meters, as they typically communicate back to the so-called distribution system operators (DSO) via radio or the mobile network. Then you have those who are afraid that electricity will become more expensive - at least for families that don't have much flexibility in when they need to use electricity. And thirdly, you have those concerned about data security and privacy because of the frequent readings done by the power companies.
The Norwegian Data Protection Authority (Datatilsynet) has written a bit about the new smart meters (Norwegian only) and how they can in theory be used to track individuals and both reveal and predict if people are and will be home at a certain point in time.
I also think the article from tu.no about smart meter security (Norwegian only) is pretty interesting in this context.
The new smart meters come with a Home Area Network (HAN) interface where you can get more details about your power usage. My house is a smart home and I want to integrate and use the data available through the HAN interface (which sends OBIS messages via M-Bus). So, around the time I got the new meter I logged into Norgesnett's site to get more information and see what kind of meter data was available. I used this opportunity to check if Norgesnett protects my data.
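As a small aside on the HAN data: the meter values are identified by OBIS codes, six value groups conventionally labeled A through F. A minimal sketch of picking such codes apart - the code list here is just an illustrative subset I've seen in Norwegian HAN documentation (the exact list varies between meter vendors), and real M-Bus frames need a proper decoder:

```python
# A few well-known OBIS codes (assumption: illustrative subset only;
# vendors publish their own complete lists):
OBIS_NAMES = {
    "1.0.1.7.0.255": "Active power import (W)",
    "1.0.2.7.0.255": "Active power export (W)",
    "1.0.1.8.0.255": "Cumulative active energy import (Wh)",
}

def parse_obis(code: str) -> dict:
    """Split a full six-group OBIS code (A.B.C.D.E.F) into its value groups."""
    a, b, c, d, e, f = (int(part) for part in code.split("."))
    return {"medium": a, "channel": b, "quantity": c, "processing": d,
            "tariff": e, "billing_period": f,
            "name": OBIS_NAMES.get(code, "unknown")}

print(parse_obis("1.0.1.7.0.255")["name"])  # prints: Active power import (W)
```

Here group A = medium (1 means electricity), C = the physical quantity and D = the processing method, which is how 1.7.0 reads as "instantaneous active power import".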
When logged in to Norgesnett's site I had the Vivaldi developer tools open and took my usual look at the source code, network calls, etc. Most of it looked pretty good.
Norgesnett has a feature where you can add other "customer relationships" to your main account. Using that feature you can easily switch between your different accounts. To add another customer you need their customer ID and 4-digit PIN. The customer IDs seem to be just an incrementing integer. Maybe one could get hold of other users' PINs?
They also have an online form where one can change one's own personal data. For some reason the customer ID is posted as part of the form. I asked a friend for his customer ID and quickly found out that I could post with his customer ID and an e-mail address of mine.
After that was done, an e-mail was automatically sent with both my friend's PIN and a direct link to finish the connection between the e-mail address and the customer ID. The link didn't work for some reason, but with the PIN I could add the account as a "customer relationship" to my own account.
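The root cause boils down to an endpoint that trusts a client-supplied customer ID without checking ownership. A minimal simulation of the flawed flow - all names, IDs and messages are hypothetical, not Enoro's actual code:

```python
# In-memory stand-in for the vendor's backend (everything hypothetical):
CUSTOMERS = {
    1001: {"pin": "4321", "alert_email": "victim@example.com"},
}

def handle_profile_form(posted_customer_id, posted_email):
    """The flawed flow: the endpoint acts on whichever customer ID was
    posted, without checking that the logged-in user owns that record."""
    customer = CUSTOMERS[posted_customer_id]
    # The confirmation e-mail - PIN included - goes to the *posted* address:
    return (f"To: {posted_email}\n"
            f"Your PIN is {customer['pin']}. "
            f"Follow the link to connect this e-mail address.")

# An attacker only needs a victim's (sequential) customer ID:
print(handle_profile_form(1001, "attacker@example.com"))
```

The fix is the usual one for this class of bug: derive the customer ID from the authenticated session instead of the form, and never echo secrets like the PIN to an unverified address.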
If the other user had specified an e-mail address for getting alerts, one could even change the e-mail address back and no one would ever notice that the account was accessed. Of course, one can hope there is some kind of logging in place that potentially could catch this.
Using some of the wording and URLs from the login page it's easy to find other Enoro customers that have the same customer system in place. And there are quite a few.
I hope they don't have unlisted and secret addresses available.
The power usage is not reported in near realtime on Norgesnett's customer-facing website, but rather weekly plus at the start of each month. Hopefully they would have noticed that something was going on if this was taken advantage of on a large scale.
This is speculation as I have not tried to confirm the vulnerability for other Enoro customers than Norgesnett (and not more than one other customer), but a quick Google search makes me believe at least the following 14 power companies have the vulnerability:
There could absolutely be more companies than these as well.
In Norway we can have separate companies for electricity distribution ("nettselskap") and electricity retailing ("kraftselskap") which makes some persons appear multiple times in those numbers.
I wrote an e-mail to Norgesnett's customer support in the evening telling them about the issue. I immediately got an automatic response.
Around noon I got a reply back thanking me and telling me they had relayed the message to their system vendor (Enoro) and that it should be fixed shortly.
I never heard back after that, not even when I told them I was going to post this, but I got a confirmation from a journalist that Enoro said the issue was fixed.
I don't really want this to be a discussion about smart meter security. Unless someone hacks the firmware on your meter, no one should externally be able to track individuals. In the case of Norgesnett it would also be hard to tell whether someone in a house is on vacation.
I think of this as yet another case showing that your personal data is not safe; it's long gone. Close to all your personal information is already in the hands of anyone who wants it. But I do hope that power companies in general have their security in order.
Gator Watch had the worst security I've seen in an online service in a long time. Now at least the company selling Gator Watch in Norway has released new watch firmware and new mobile apps to tackle all the issues. And what they have done is actually really impressive.
Early August 2017 I found out that the Gator watches could be tracked, locations could be spoofed, and private voice messages were openly available on the Internet.
Shortly after, the Norwegian Consumer Council (NCC) also did a check on Gator Watch and a few other brands and found the same issues + illegal or non-existent terms and conditions.
The whole so-called #WatchOut campaign led to a hectic and probably stressful few months for the companies Gator Norge, GPS for barn, Tinitell and PepCall (Xplora). The Norwegian Data Protection Authority (Datatilsynet) even forced Gator, PepCall and GPS for barn to stop all processing of personal information (Norwegian link) until they had fixed the issues regarding their information security.
The Norwegian company selling Gator Watch - Gator Norge - has since released new watch firmware and created completely new client apps. I wanted to know if the security was in order before using the watch for my own family.
The old firmware for the Gator Watch was pretty bad. The communication between the watch and the server was in clear text without any encryption at any level. There was also no authentication or verification of the user - no session identifier, only the fixed IMEI number. You can't really make it much worse than that.
Gator Norge has released new watch firmware and a description of how to do the upgrade (Norwegian link only, sorry). The process for upgrading the firmware for Gator 2 and Gator 3 is pretty complex. It's 34 long steps that could scare most people. Even I, with some tech background, ran into one issue due to misunderstanding one of the steps. Also, the upgrade must be done using Windows. But one has to do what one has to do, so I did the upgrade twice - once for a Gator 2 watch and once for a Gator 3 watch.
Using the old firmware I could just change the server the watch used to learn how the watch communicated. Then I could use that knowledge to talk to the server pretending to be a watch. With the new firmware I don't know how to do that. Also, reading or reverse engineering the firmware is outside my expertise. This means I haven't been able to look at the communication between the server and the watch. I have to trust Gator Norge when they say it is now encrypted. I also hope that it isn't possible to easily spoof other watches. Hopefully someone will take the time to analyze the new firmware that can be downloaded from Gator Norge.
The old apps and server APIs for the Gator Watch were some of the worst products I've seen in a long while - security-wise.
Gator Norge trashed the old apps and server and replaced it with brand new software. I did an analysis of the Android app and also took a look at the server.
What is interesting is that this new Gator watch app - called TeleGAPP - isn't a new concept from Gator Norge. They actually released this app in September 2017, but as an app-to-app-only way of tracking friends and family. I took a brief look at the security later that fall, and I found multiple security holes. However, I never got around to finishing my work or even reporting them. Shortly after, I learnt that this app would soon become the new Gator watch app. So I postponed the rest of the testing - and reporting - awaiting this new version of the app. In retrospect, given the delays of the new app, I see that I should have reported the findings I had back then, even though it was unfinished work. I will probably write another post on that matter as it does have some interesting points. They now seem to have taken down the server used back then.
TeleGAPP utilizes custom-made certificate pinning. This meant that I couldn't just use a regular HTTP proxy out of the box. While I see the upside of having such pinning, it sure makes it much harder to simply check if an app or service is secure. The chance that a good guy will skip testing is very high, while a bad guy will go ahead anyway. In total this typically makes the Internet less secure.
I recently posted a guide on how to crack Android apps which explains exactly what is needed for cases like this. A simple smali one-liner like return-void was all that was needed to get around the pinning.
After rebuilding the app I could use the HTTP proxy Charles to see what the encryption, authentication and data looked like. And I have to say; this new app is a whole different ball game. This app has exactly the security measures I would've expected to find in the first place.
I could try to go through all the security features and what makes the app less vulnerable, but I suppose I wouldn't be able to make a complete list. All HTTP calls are of course encrypted. The authentication ends up with a bearer authorization token. The token is short-lived - in fact so short-lived that it makes the app a bit buggy, with HTTP calls failing without proper automatic re-authentication, which ends up as a bad user experience with error messages popping up too often (though I think they have fixed this in a later version). The authorization seems to be in order; I wasn't able to access resources or data that I wasn't supposed to have. In general the API seemed clean, secure and made by people who know what they are doing.
What I did not test was making PUT requests that I should not be authorized to make. Too often I see that developers properly secure reading data, but don't check if others are allowed to update it. I typically test this if it's quick and easy to create another user to test on, or if I know someone else with an account that I'm allowed to test with. So I'll just assume that they have that in order as well. At least they have a good base here.
If we look past the bad user experience of manually upgrading the firmware (which is of course not needed for people buying new watches today), the new app is pretty different from the old one. Since they started from scratch they haven't added all the features back in, but my understanding is that they are working on it.
One of the features that isn't in the new app is the mode where you could listen in on the watch without anyone knowing it. This made the old watch into a listening device. I really doubt that we will see that feature coming back.
Another one of the features not available is the one to - from parent to kid or the other way around - leave a voice message if you are unable to get hold of the other.
A third feature now missing is the one where you could set up geo-fences for areas like home, school, football field.
Personally it annoys me that the map where you can see the watch now doesn't have a satellite mode.
The latter two features I hope to see back in a coming version of the app.
Judging from what Gator Norge has said they have had third party companies developing the app and doing security testing of it. As I understand it they will keep having regular audits of the products - and that is really the only serious way of handling security in such a product.
In Norway we are very fortunate to have watchdogs like the Norwegian Consumer Council (NCC) and the Norwegian Data Protection Authority (Datatilsynet). I think they do a great job. It's so good to see them put the focus on IT security and force through changes. It makes us all safer and more secure.
And I'm also happy to see - at least the Norwegian version of - Gator taking this seriously, being humble and saying sorry, and returning with a much more secure product. I, for one, will let my kids wear this watch now.
This tutorial on how to crack Android apps is one of my more technical posts. If you aren't a developer you might want to skip this one. :) I'm assuming some basic knowledge of UN*X, Java and Android.
Sometimes I like to check if online services I use really are secure. I've presented quite a few cases to prove that they very often are not. Mostly I can use very simple techniques to check the security as there are so many basic security vulnerabilities out there. When it comes to apps I often use an HTTP proxy like Charles to take a look at the HTTP and HTTPS traffic. However, once in a while there are apps that use e.g. HTTP tunneling or certificate pinning. In those cases you need to go one step further to be able to listen to the network traffic.
Other reasons to decompile apps could be to recover lost source code, to inject language translations or even fix a bug. But hey, remember, don't do anything you are not allowed to. Don't break the law. This guide is just for educational purposes when you have legitimate reasons to do what you do.
These are the topics that I'll cover.
Very often you don't have to get your hands too dirty to get hold of a decompiled app. There are some good services out there that can provide you with most Android APKs, and even some to decompile them.
To get hold of an APK you can typically just google the package name. There are quite a few sites to download them from. Some are more frequently updated than others. Note that you can get hold of different versions and APKs for different architectural platforms.
A word of wisdom: Don't download and run some random APK out there (or at least do it in a sandboxed and/or emulated environment). There are quite a few sites that serve bogus or altered APKs. The app might look all right, but still have some malware injected. Don't blindly trust the ones that I recommend either. If the APK is signed with the same key as an APK that you got from Play Store you should be able to trust its origin (though there have been cases of private keys in the wild (and even repackaged APKs uploaded to the vendor's own web site)).
Here's a few you might want to try out:
The quickest and easiest way to decompile an APK is to just use an online service. You just upload the APK and get an archive with all the resources and decompiled files. javadecompilers.com is the one I have used, and I have been pretty happy with it.
As you might know, the APK file is really just a ZIP file, so you can typically just rename it to .zip and double click it or run unzip and you can start investigating the app. If it's a hybrid app you might not have to decompile it at all to get access to everything. Actually, the Gator Watch app was a hybrid app and gave away everything with little effort.
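Because an APK is an ordinary ZIP archive, you don't even need to rename it - for instance, Python's zipfile module opens it directly. This sketch builds a tiny stand-in "APK" in memory (with fake entry contents) so it runs without a real file; with a real APK you would just pass its path to ZipFile:

```python
import io
import zipfile

# Build a tiny stand-in "APK" in memory so the example is self-contained;
# a real one would come from `adb pull` (these entry contents are fake):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", b"\x03\x00\x08\x00")  # binary XML in a real APK
    apk.writestr("classes.dex", b"dex\n035\x00")
    apk.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")

# An APK opens like any other ZIP file - no renaming to .zip needed:
with zipfile.ZipFile(buf) as apk:
    for name in apk.namelist():
        print(name)
```

Listing the entries this way is a quick first look: classes.dex holds the compiled code, and hybrid apps often have their HTML/JS sitting right there in an assets folder.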
You need at least the Android tools and SDK, but for most people I would recommend just installing Android Studio and following the instructions to set it up as normal (but skip things like the SDK for Android TV and other extras that will slow down your download).
Apktool can be installed manually, or if it's available via your package manager you can just install it using a command like apt-get install apktool.
The first step of the reverse engineering is to get hold of the APK. I'll use my own Android app Developer Tools as an example app. It's open source and if you want you can get the source code and APKs from GitHub.
The command-line tool adb (Android Debug Bridge) is used for all communication with the device or emulator. You can find the tool in the platform-tools folder of the Android installation.
$ # Lists all packages:
$ adb shell pm list packages
<loong list of apps />
$ # Simple way of searching for packages:
$ adb shell pm list packages | grep roysolberg
package:com.roysolberg.android.smarthome
package:com.roysolberg.android.datacounter
package:com.roysolberg.android.developertools
$ # Get the path of a package:
$ adb shell pm path com.roysolberg.android.developertools
package:/data/app/com.roysolberg.android.developertools-1/base.apk
$ # Get hold of the actual APK file:
$ adb pull /data/app/com.roysolberg.android.developertools-1/base.apk
/data/app/com.roysolberg.android.developertools-...file pulled. 25.2 MB/s (2035934 bytes in 0.077s)
It's increasingly common (and required for Play Store releases in the second half of 2021) that apps use Android App Bundles. This adds a layer of complexity when cracking apps.
When the app is an App Bundle you will in the above example see more than one APK file. Typically you will see base.apk (the common code), split_config.arm64_v8a.apk (config for the CPU architecture), split_config.xxxhdpi.apk (config for the screen resolution) and typically a split for the language and maybe some dynamic features. You need to copy all the APK files - either in separate adb pull commands or by pulling the whole app directory.
The next step is to unzip and decompile the APK. Apktool does this for us.
$ # Decode the pulled APK into a directory named base:
$ apktool decode base.apk
$ # d works as an alias for decode:
$ apktool d base.apk
If you are interested in the core code and not the resource files you only have to consider the base.apk file and not the rest. But note that you need to either decompile and recompile the other APKs (or at least unzip them and remove the META-INF folder before zipping them again) if you need to resign the app as shown in a later step.
This is where the hard work starts. The code files are now fully readable, but the code is in the smali format. You can think of smali as a sort of assembly language.
As an example we'll first change the language string app_name to Hacker Tools.
$ # Edit the main language file:
$ vi base/res/values/strings.xml
Then we'll change some hard coded text so that we have changed both resources and actual code.
$ # Search for the file we want to change:
$ grep -nr 'originally' base/smali
base/smali/com/roysolberg/android/developertools/ui/activity/MainActivity.smali:651: const-string v4, "This app was originally just created for myself to make some development tasks a bit easier. I've released it to Play Store hoping that someone else might find it useful.\n\nIf you want to get in touch me, please send me a mail at dev-null@example.com.\n\nPlease note that I take no credit for the third party apps."
$ # Edit the smali file and change the string value:
$ vi base/smali/com/roysolberg/android/developertools/ui/activity/MainActivity.smali
There's a way out if you'd rather read Java instead of smali. The excellent tool jadx provides both a command-line and a GUI tool for converting dex to Java.
You can open the APK files directly in the program and have all the code decompiled and converted. Reading Java is after all easier than reading the smali format. Just note that you cannot simply change the Java code and recompile it with jadx. You can always try to get a project up and running in Android Studio, but it will typically take some major effort as jadx seldom manages to fully decompile everything.
There are quite a few steps to getting everything together. We need to rebuild the app, sign it, zipalign it, and then install it. If the properly signed app is still installed it needs to be uninstalled first, as our signature differs from the existing one.
The command-line tool zipalign is needed to align the contents of the APK for Android to be able to run it. You can find the tool in the build-tools/<some version number> folder of the Android installation.
$ # First build a new APK with the changes:
$ apktool build base -o base.unaligned.apk
$ # Sign the app using the Android debug certificate generated by the Android Studio installation:
$ jarsigner -verbose -sigalg MD5withRSA -digestalg SHA1 -keystore ~/.android/debug.keystore -storepass android base.unaligned.apk androiddebugkey
$ # Align the APK:
$ zipalign -v 4 base.unaligned.apk base.aligned.apk
$ # If the original app (with a different signature) is installed it must be uninstalled.
$ # Please note that you will lose any app data you have.
$ adb uninstall com.roysolberg.android.developertools
Success
$ # The final step is to install the newly altered app (-r for reinstall (keeping the app's data)):
$ adb install -r base.aligned.apk
Success
$ # To keep an eye on the log and what's going on you can use logcat:
$ adb logcat
That's it! :-)
If you have multiple APK files and you just signed the altered APK file with a new certificate, you will need to sign every single APK file with jarsigner and then zipalign them. Installing them requires the install-multiple option with all the APK files as arguments:
adb install-multiple -r base.aligned.apk split_config.xxxhdpi.aligned.apk apk3.aligned.apk apk4.aligned.apk
It might take a little getting used to, but reading smali isn't all that bad. If you run into any concrete problems you'll find the answer with some googling. A good tip is to create some small, very simple Java classes yourself and check out what they look like in the smali format.
If you are having trouble navigating the smali code and understanding the flow of an app, you can use the following smali code. It will call Thread.dumpStack() which logs the current thread's call stack.
invoke-static {}, Ljava/lang/Thread;->dumpStack()V
If you need to know the value of a string - e.g. a parameter - you can use Log.d(String tag, String message) to log it to the system log.
const-string/jumbo v0, "YourTag"
invoke-static {v0, p1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
Very often - but not in the case of my Developer Tools app - the code will be shrunk and obfuscated using ProGuard. This makes the code a lot harder to read and understand. There aren't really any good ways around it, but doing the thread dump trick and taking your time to follow the code will eventually get you where you want to be.
If you have followed along with the guide you will have seen the app change from the version on the left to something like the one on the right. One of the reasons I wrote this guide was for my own sake - to have something to easily copy and paste from when doing some reverse engineering myself - but I thought it might be useful for others as well. :)
Try to "hack" https://ra.gl/ [link broken]. You can see the rules and goal on that site.
I give a few talks every year. In recent years I've mostly talked about different mobile development topics, but because of this blog I have recently had the opportunity to talk about web application security.
Last week I gave a talk at Google Developer Group Bergen, Norway. The talk was about hacking web apps.
After the talk itself we had a session with some hands-on "hacking" of a web app. For this I had created a web site that had intentional "security vulnerabilities".
The goal of the assignment is simple: Just log in on the administrator page at ra.gl [link broken] and get hold of your unique keyword that proves your accomplishment.
I have some rules so that the site isn't ruined completely. It is after all hosted in a shared hosting environment and I don't want anyone else harmed.
You don't really need anything other than your browser's developer tools. Personally I like to use its "Copy as cURL" menu option and tweak the HTTP requests in a simple text editor.
The security vulnerabilities are the typical ones I have found and presented on my blog. If you have read some of those posts you might have some clues about what to look for.
It isn't a very hard task to break in. But that is actually part of the point: there are so many weaknesses in so many web apps today. With some knowledge and open eyes you can get far. If you hit gold all the way you can solve it within minutes, but most people seem to need more time.
I hope you enjoy this small assignment! Don't hesitate to give me feedback or if you have any ideas for improvements or other cool stuff that should be included. :-)
An online kindergarten service used by a lot of kindergartens was leaking a lot of data about all of the kids and their parents.
Who: | It's too long ago, so I won't tell |
Severity level: | High |
Reported: | 2013 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | iPad mini with engraved thank you |
Issue: | Information leak with data about kids and parents |
Early 2013 they started using an online system in my kid's kindergarten. The system contained personal info about kids as well as parents and was used for pictures and messages.
I really liked the system. It was very easy to do stuff like notifying if your kid was home sick, to see pictures of everyday life or to see if the sleep schedule was followed as it should. It also seemed pretty user friendly for the staff who had an iPad where they could easily click and register when a child was delivered in the morning and do all other communication with the parents. It was supposed to replace all post-it notes and paperwork.
When we were told about the system and got logins for it we also got a brochure telling "All data in <the system> is handled with a very high degree of security". Of course, a statement like that works pretty much as a cue for me to look into the security of the system.
It's hard to recall all the details years afterwards and this was before I kept any notes about my findings, so the technical description isn't very long or deep. I started out as I normally do; I was using the site while having the browser development tools open. In general stuff looked pretty good. There seemed to be proper encryption, authentication and authorization all over the place. But one of the challenges with web application security is that authorization is off by default and you have to actively add it and implement it correctly.
Even if the authorization seems okay, there is almost always this one place or function where it's forgotten about. Often it's that one functionality that was added later or that one that no one uses that much. The kindergarten system had a function for exporting all the data they had stored about your child. It was some sort of background job that ran asynchronously and was kickstarted by some URL. The URL contained an incremental integer ID named childId. The scope of the ID was the entire kindergarten (though the online system was running many, many kindergartens).
As you might have guessed, one could just change the ID and start the background job for another child in the same kindergarten. When the job was done you got a downloadable ZIP file with all of the contents.
The URL for creating a downloadable archive with all of the contents belonging to the child didn't have an authorization check. It was possible to systematically download all contents for all children within the same kindergarten.
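To make the weakness concrete: an incremental integer ID means an attacker needs no guessing at all. The sketch below is purely illustrative - the endpoint name and `childId` parameter shape are my own invention, since the real system's URLs were never published.

```python
# Hypothetical sketch: why an incremental integer ID is trivially enumerable.
# The base URL and query parameter are made up for illustration.

def export_urls(base_url, my_child_id, radius=5):
    """Candidate export URLs for the IDs surrounding your own child's ID."""
    return [
        "%s/export?childId=%d" % (base_url, child_id)
        for child_id in range(max(1, my_child_id - radius), my_child_id + radius + 1)
        if child_id != my_child_id
    ]

urls = export_urls("https://kindergarten.example", 1042)
print(len(urls))  # 10 candidate URLs, no guessing required
```

With a long random UUID instead of a counter, the same loop would produce nothing useful - which is exactly why the missing authorization check mattered so much here.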
This is the data that was available:
Pretty late at night I sent an e-mail to both the manager of our kindergarten and the CEO of the whole kindergarten system.
Just 30 minutes later that night I got a reply from the CEO thanking for the report and saying that they would look into it immediately.
Early the next day I got a new, longer reply from the CEO. They had already closed the issue for all customers. While this was a classic programming error, the description of the system's structure, storage and access control was fairly reassuring. And supposedly the kindergartens are not allowed to store any information that is regarded as sensitive (as defined by law), though I assume all parents feel that all that data about their child is pretty sensitive.
A couple of hours later I also got a reply from the manager of the kindergarten. It was a bit more "light" than what I would have wished for, with no critical voicing towards the vendor of the system, but he thanked me and said he was sorry for the incident.
Out of the blue the CEO sent me an e-mail asking if he had found my correct address, as they wanted to send me a little something.
I have to admit that receiving an iPad mini and their general tone through the process probably was enough to hold me back from telling the media back then.
After I started this blog I noticed a tweet about a talk about vulnerabilities in an online kindergarten system. It's a great talk given by Halvor Sakshaug at NDC Oslo earlier this year.
From the screenshots I could see that this was a completely different kindergarten service from the one I used. So even more kids' information and safety was at risk.
This issue has always bothered me a bit. It was pretty serious, but I didn't disclose it publicly. It's one of many examples of serious issues that have absolutely no consequences for the company doing the harm, while the users put at risk are never informed.
This issue was one of the reasons I decided to start disclosing security issues on my blog. People need to know that their personal data is in the hands of anyone who wants it.
And with the other kindergarten system's vulnerabilities mentioned above, and the security issues in the kids' smartwatch that I discovered late this summer, I feel even more strongly that we need more disclosure and more attention to IT security.
I'm summarizing the 13 security issues I've presented on the blog over the last three months.
In the table below I've tried to show how different types of criminals can directly use the information from the different cases. Of course, combining sources would make you even more vulnerable, so I'll get more into that further down in this post.
Case | Jealous partner | Stalker | Kidnapper | White-collar criminal | Political hacker | Foreign intelligence
---|---|---|---|---|---|---
#1 - Tryg + Infotorg | - | ✔ | - | ✔ | - | ✔ |
#2 - Acme | - | - | - | - | - | - |
#3 - Digipost | - | ✔ | - | - | ✔ | ✔ |
#4 - Acme2 | - | - | - | ✔ | - | - |
#5 - Sbanken | ✔ | - | - | - | - | - |
#6 - Orkla + Japan Photo | - | - | - | - | - | - |
#7 - Energi Treningssenter | ✔ | ✔ | - | - | - | - |
#8 - Acme3 | - | - | - | ✔ | - | - |
#9 - IKEA | - | - | - | - | - | - |
#10 - Memoria | - | - | - | - | - | - |
#11 - Gator Watch | - | - | ✔ | - | - | - |
#12 - Gjensidige | - | ✔ | - | - | - | - |
#13 - GoShopping | - | ✔ | - | - | - | - |
With jealous partner I'm considering persons who have some kind of abusive power and control or jealousy. They could make use of usage data like the time the partner entered the door at the gym or what he or she bought at the store at what time.
A stalker is a person with unwanted or obsessive attention towards another person. Using information leaks a stalker would be able to get more personal information (i.e. address, phone number, e-mail address) about the victim. And getting something like the victim's IP address would open for attacks on computer equipment which again can lead to more leaks of personal data (think your mobile phone with all your images, your e-mail, etc.).
Kidnappers would be able to use location data and other usage information to understand patterns and when it's a fitting time to commit the crime.
In white-collar crime I include identity theft and other types of financially motivated crimes. Useful information could be Social Security numbers (SSNs), names, addresses, phone numbers, etc.
With political hacker I mean individuals or groups that have some kind of political motivation to get access to data about politicians. A list of people's names and IP addresses would be great news for trying to break into a politician's computer network.
I suppose some foreign intelligence organizations wouldn't mind getting an up to date high quality list of names, Social Security Numbers and addresses for most of the grown population in a nation. And for more targeted operations full names and IP addresses sure helps.
More often than not the security issues I have found have included some sort of personal information leak. In the table below I'm summarizing the severity and the leaks.
Case | Severity | Data leaked | Enumeration vulnerability | Privacy threat |
---|---|---|---|---|
#1 - Tryg + Infotorg | Low to medium | SSN, names, addresses, birthdays, etc. | ✔ | ✔ |
#2 - Acme | Very low | - | ✔ | - |
#3 - Digipost | Medium | Names and IP addresses | ✔ | ✔ |
#4 - Acme2 | Critical | - | ✔ | ✔ |
#5 - Sbanken | High | Bank account balances | - | ✔ |
#6 - Orkla + Japan Photo | Low | Pictures and first names | ✔ | ✔ |
#7 - Energi Treningssenter | High | Names, visit logs, e-mail addresses, phone numbers, bank account numbers, pictures | ✔ | ✔ |
#8 - Acme3 | Critical | A lot of different company data | ✔ | ✔ |
#9 - IKEA | Low to medium | Names and locations | - | ✔ |
#10 - Memoria | High | Private messages | ✔ | ✔ |
#11 - Gator Watch | Critical | Kids' location, voice messages, phone numbers | ✔ | ✔ |
#12 - Gjensidige | Medium | Names, addresses, insurance details | ✔ | ✔ |
#13 - GoShopping | Low to medium | Names, addresses, order details | - | ✔ |
A lot of different personal data has been leaked. And looking at the cases you'll see that you can use data from one source to look up data in another.
The checkmark for enumeration vulnerability indicates whether it was possible to access all the data systematically. Only a few of the cases required prior knowledge like a bank account number or e-mail address, so this is bad news for you as an end user.
While not all cases are directly applicable for criminals, almost every single one of them poses a threat to your privacy. This threat goes from you not surfing anonymously on the Internet to your home network being vulnerable for further attacks to your kids being tracked to your online shopping being exposed etc.
While the vulnerabilities alone are bad, combining them may make them more severe. So which of the 13 could have been used together?
In the table below I've marked the cases in which there is some overlapping data that makes it possible to retrieve more data or increase the attack surface.
Case | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 | #9 | #10 | #11 | #12 | #13 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
#1 - Tryg + Infotorg | - | - | - | - | - | - | - | - | - | - | - | - | - |
#2 - Acme | - | - | - | - | - | - | - | - | - | - | - | - | |
#3 - Digipost | ✔ | - | - | - | - | - | - | - | - | - | - | - | |
#4 - Acme2 | - | - | - | - | - | - | - | - | - | - | |||
#5 - Sbanken | - | - | - | - | - | - | - | - | - | ||||
#6 - Orkla + Japan Photo | - | - | - | - | - | - | - | - | |||||
#7 - Energi Treningssenter | ✔ | ✔ | ✔ | - | - | - | - | - | - | - | |||
#8 - Acme3 | ✔ | ✔ | ✔ | ✔ | - | - | - | - | - | - | |||
#9 - IKEA | ✔ | ✔ | ✔ | ✔ | - | - | - | - | - | ||||
#10 - Memoria | - | - | - | - | |||||||||
#11 - Gator Watch | ✔ | - | - | - | |||||||||
#12 - Gjensidige | ✔ | ✔ | ✔ | ✔ | ✔ | - | - | ||||||
#13 - GoShopping | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | - |
I wanted to write this post to make it clear why you should care about these issues. When I can find all this data with very little time and effort, this must surely be the tip of a very small iceberg in an ocean full of very big icebergs.
GoShopping - a company owning several online stores - let anyone see all your previous orders and order lines using just your e-mail address.
Who: | GoShopping |
Severity level: | Low to medium |
Reported: | July 2017 |
Reception and handling: | Poor |
Status: | Fixed |
Reward: | A thank you |
Issue: | Leak with all order details |
I recently returned to KitchenOne to buy some accessories for my coffee machine. I didn't have an account (I don't think you can have one), but I was a bit relieved and surprised when, during checkout, I could just enter my e-mail address and it would fill in my name, address and phone number.
That made me think. Is it OK that anyone can enter my e-mail address to a service and get back my full name, address and phone number? And maybe there could be more than meets the eye?
When I was at the checkout step I opened the Vivaldi developer tools to inspect the network traffic. There was an Ajax call to the parent site GoShopping's CMS (they're using Umbraco, the open source ASP.NET CMS) returning some JSON with the name, address and phone number. But the JSON contained more: my previous order in full detail, including all the items I bought. Even my payment information was included.
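The underlying mistake is an endpoint serializing the whole customer record when the checkout form only needs three fields. A minimal sketch of the server-side fix, with field names I've assumed for illustration (the real payload shape was never published):

```python
# Sketch of the fix: whitelist the fields the checkout form actually needs
# instead of serializing the entire customer record. Field names are assumed.

ALLOWED_FIELDS = {"name", "address", "phone"}

def for_checkout(customer_record):
    """Return only the whitelisted fields of a customer record."""
    return {k: v for k, v in customer_record.items() if k in ALLOWED_FIELDS}

record = {
    "name": "Jane Doe",
    "address": "Example Street 1",
    "phone": "555-2368",
    "orders": [{"item": "Coffee grinder", "price": 129}],  # must never leave the server
    "payment": {"card": "**** 1234"},                      # must never leave the server
}
print(sorted(for_checkout(record)))  # ['address', 'name', 'phone']
```

An explicit whitelist fails safe: new fields added to the record later stay private until someone deliberately exposes them.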
The service for looking up the address from the e-mail address leaked the following information:
And then there's the question of whether the user wants it to be possible to look up his or her name, address and phone number using their e-mail address. What if you have some kind of unlisted address? This part has not been fixed, but is presumably working as intended.
Monday night I sent an e-mail telling about the leak.
I got an e-mail back telling that they would look into the issue.
Having not heard anything back and not seeing any fixes I asked them for a status. I did not receive any reply on this e-mail.
I told them I would write about the case here on my blog that very same day.
10 minutes(!) later I got a reply telling that the issue would be fixed some time the week after. As a believer in responsible disclosure I decided to wait for them to release the fix.
I tested the leaking endpoint and found that it was fixed.
Would they have released any fix if I hadn't told them I was going to do a write-up? I'm not so sure.
I discovered a similar, less severe case with Power in September. Power is a chain selling consumer electronics. When you check out you can specify your phone number, and if you have shopped there before they fill in the checkout form with your name and address. Seems okay, right?
There are a couple of problems here. The first is that they also returned the customer's e-mail address, which is what I complained about in my tweet to Power. They have since fixed this and removed the e-mail address from the data returned.
The second problem is the same as in this case. Okay, so the company removes the biggest issue, but have you agreed that it should be possible to look up your name and address using your e-mail address or phone number? What if you have an unlisted phone number? What if you have an unlisted address?
This case is a classic example of server endpoints returning more data than what is shown to the user - and this time the data really shouldn't be there.
I don't like when it takes more than 3 months to fix something that seemingly is so easy to fix. And I'm not sure they would have fixed this at all if I hadn't been following them up and if I hadn't had this blog. At least now the users' data is more secure.
Ed Foudil has proposed security.txt as a standard method for making it easier to report security issues. It's a plain text file with contact info that should be located in the .well-known directory of a web site (or the root of a file system). Currently it's an Internet-Draft that has been submitted for RFC review.
Over the last three months I have published 12 fresh security issues on my blog. While finding the issues has typically taken only a few minutes, finding somewhere to report them has sometimes been a real pain and very time consuming.
In most cases I have had to contact first-line customer support and try to write in a way that ensures they understand they need to pass the report on to the right party. Often this works okay, but I have even been turned down by customer support because they did not understand what I was trying to tell them.
In the case with IKEA I spent a lot of time trying to find the correct contact point. They had no general e-mail addresses anywhere. I ended up e-mailing the issues to three press contacts, and it took three weeks before I got their attention and could explain what the issue actually was. And according to IKEA this is actually how any security issue should be reported. security.txt would have been a much better solution.
In the case with the insurance company Gjensidige they managed to lose one of two reports before it reached the IT department. security.txt would have solved this nicely.
In the case with 1 million+ Norwegian Social Security numbers etc. exposed, the insurance company Tryg did not read the e-mails sent to the specified contact address, and Infotorg - who was responsible for delivering the data - just stopped responding. It was probably a lost cause for Infotorg, but at least Tryg would have been notified with a security.txt.
In the case with the company vulnerable to SQL injection for 10 years, I did not know if their customer support form worked at all, as I never got a reply. Writing to an e-mail address specified in security.txt would've helped here.
In most cases the info would have been delivered directly to the correct persons instead of being delayed in some first line of customer support. You want to make it easy to report a security issue, and you want the report to reach the right destination asap.
It's a really simple standard. And simple is indeed beautiful. security.txt should be a plain text file located in the .well-known directory of the site, just like a bunch of others as per RFC 5785.
The only directive that must be present in security.txt is Contact, which lets you specify an e-mail address (maybe not very smart considering spam), a phone number, or a URI that provides contact info. The order defines the preferred method of contact.
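Since the order of Contact lines expresses preference, any tooling reading the file has to preserve it. A minimal parser sketch, based only on the draft format as described here:

```python
# Minimal security.txt parser sketch. It keeps directives in file order,
# since the order of Contact lines expresses the preferred contact method.
# Based on the draft format as described in this post, not the final RFC.

def parse_security_txt(text):
    """Return a list of (directive, value) pairs in file order."""
    directives = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        name, sep, value = line.partition(":")
        if sep:
            directives.append((name.strip(), value.strip()))
    return directives

sample = """\
Contact: https://example.com/security-contact-form/
Contact: 555-2368
Encryption: https://example.com/pgp-key.txt
"""
contacts = [v for n, v in parse_security_txt(sample) if n == "Contact"]
print(contacts[0])  # https://example.com/security-contact-form/
```

Note that str.partition splits only on the first colon, so URI values containing "https://" survive intact.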
For my own security.txt I have used Google's reCAPTCHA Mailhide and a link to my own Twitter account.
To ensure the authenticity of the security.txt, one should use the Signature directive to sign it using either an external or internal signature.
The Encryption directive lets you add your key for encrypted communication, like your PGP key or similar.
The Acknowledgement directive lets you link to a page where security researchers are recognized for their reports.
It's a proposal, so be sure to check out securitytxt.org for the latest update of the specification. Also, at the time of writing, you should take a look at the draft branch at https://github.com/securitytxt/ for the latest development. There's also some interesting discussions on the issue tracker of the same project.
Contact: https://example.com/security-contact-form/
Contact: https://example.com/mailhide/security
Contact: 555-2368
Signature: https://example.com/.well-known/security.txt.sig
Encryption: https://example.com/pgp-key.txt
Acknowledgement: https://example.com/security-hall-of-fame.html
This one is easy. Please add support for security.txt - as soon as you can - to make the web a safer and more secure place for us all.
Gjensidige Forsikring - one of Norway's biggest insurance companies - was leaking information about customers' cancelled insurances to other customers. First they spent 3.5 months falsely concluding there was no issue, and then one additional month fixing it. Also, their web site can be abused for sending e-mails.
Who: | Gjensidige Forsikring |
Severity level: | Medium |
Reported: | May 2017 |
Reception and handling: | Poor |
Status: | Partially fixed |
Reward: | 375 USD worth of gift certificates |
Issue: | Personal data leak + possibility to spoof e-mails |
I have had some insurances with Gjensidige for quite some time. Luckily I haven't had much use for them, but from the little contact I have had I must say I'm a happy customer.
Being their customer, I'm logged in to their site from time to time. Back in May I also took a quick peek at the web site with data security in mind.
When I was logged in I opened Vivaldi developer tools to inspect the pages and network traffic. There were quite a few Ajax calls in the different pages asking for data to display.
As it should be, for most calls the browser didn't specify any customer ID or anything like that; those calls were safe. They do, however, have some calls that include an ID specified on the client side. Most of them seemed to do a proper authorization check, but as the cases presented on this blog show, there are sometimes exceptions.
The REST endpoint returning the list of cancelled insurances did not check the ID sent in by the client. This ID seems to be an auto-incremented integer, and I could just step one number up or down to get another customer's cancelled insurances.
The Curl command copied from the browser looked like this:
curl 'https://www.gjensidige.no/ip-web/forsikringer/annullerte/<customer ID>' \
  -H 'x-klient-lokasjon: Meldingsboks' \
  -H 'Applikasjon: INTERNETT' \
  -H 'Accept-Language: nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.97 Safari/537.36 Vivaldi/1.9.818.49' \
  -H 'Accept: application/json' \
  -H 'Referer: https://www.gjensidige.no/no/1/Din+side/forsikring/administrer-forsikringer/forsikringsdokumenter-new' \
  -H 'Accept-Encoding: gzip, deflate, sdch, br' \
  -H 'Cookie: JSESSIONID=<session cookie>; <a bunch of more cookies>;' \
  -H 'Connection: keep-alive' --compressed \
  | python -m json.tool
I added | python -m json.tool just to get a "pretty print" of the JSON returned to the command line.
I reported the authorization issue using an online form on their site. When I submitted it, I noticed that the POST request in fact included the name and e-mail address of the sender, and also the e-mail address it should be sent to.
This, of course, got my attention. I saw that you could craft your own e-mails, choosing your own contents, topic, sender and receiver. I copied the Curl command from the browser and easily changed it to send a "fake" e-mail.
E-mail spoofing is nothing new, but there are a few interesting points here. There is no reason why the sender and receiver addresses should be sent from the client; that is what makes it possible to forge e-mails in this case. Gjensidige uses the Sender Policy Framework (SPF) for their domain, but that wouldn't help here, as the server sending these e-mails would be expected to be whitelisted. However, the server used in this case is not whitelisted, and Gjensidige has the rule ~all set, which allows any server to send e-mails (with SOFTFAIL) from the domain. This again means that you don't really need Gjensidige's server to send spoofed e-mails from Gjensidige. So maybe this is a feature and not a bug? I hope they don't think this is all right.
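The difference the "all" qualifier makes can be illustrated with a toy classifier. This is a deliberately simplified sketch: a real SPF check resolves the domain's TXT record and evaluates every mechanism in order, while here we only look at the final "all" qualifier.

```python
# Toy illustration of SPF "all" qualifiers. A real check (per RFC 7208)
# evaluates every mechanism against the sending IP; this only classifies
# the catch-all qualifier at the end of the record.

def all_policy(spf_record):
    """Classify the 'all' mechanism of a (simplified) SPF record string."""
    meanings = {
        "-all": "fail (reject mail from unlisted servers)",
        "~all": "softfail (accept, but mark as suspicious)",
        "?all": "neutral (no assertion)",
        "+all": "pass (anything goes)",
        "all":  "pass (anything goes)",
    }
    for token in spf_record.split():
        if token in meanings:
            return meanings[token]
    return "no all mechanism"

print(all_policy("v=spf1 include:_spf.example.com ~all"))
# softfail (accept, but mark as suspicious)
```

With ~all, most receiving servers still deliver the mail, which is why the unlisted form-handling server could send "from" the domain at all; a strict -all would have made the spoofed mail much more likely to be rejected.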
This issue opens up nice opportunities for phishing, or for simply abusing their server to send spam.
The information leaked that I saw for cancelled insurances was this:
Some customers, if not most, have several types of insurance, which opens for combining information about them. Now, I always minimize the amount of data I access so that no one can question my intentions, so I have to make an educated guess that there might be other types of information on some customers as well.
Furthermore - and this is not fixed, so I'm not sure if they consider it a security issue - it's possible to use their own server to send fake e-mails from their own e-mail addresses.
It wasn't very easy to find the correct contact point, but the closest I got was a general contact form originally to be used for whistle-blowers. I submitted the form Friday night.
It was when submitting this form the first time that I noticed - as described in more detail in the "Approach" part - that it was actually possible to use this form to send e-mails from anyone to anyone through their site. So I submitted the form one more time telling them about that as well.
Early Monday morning I got a confirmation that they had received information about the two issues and would inform the right persons. They also asked me for a phone number where I could be reached.
I wrote back and told them my phone number. I don't think they ever tried to call me.
Having not heard anything back and not seeing any fixes I asked them for a status.
In the morning I got a reply telling me that they had "taken care of it" and named another guy who would contact me when he would be back from vacation.
At night I got copied in on an e-mail from that guy where he asked a third guy about giving me feedback.
A couple of minutes later the second guy told me that he knew they had taken this seriously and worked on the issue. I would be contacted by a third guy when he was back from vacation.
I got a response from the Director of Group Security. He thanked me and quoted a security technician saying that the service in question would return an empty list if the logged in user wasn't authorized to get the list from the customer id it used for the request. The director also said that the reason it had taken me so long to get a reply was that they had gone thoroughly through all aspects of this issue.
Gjensidige used more than 3.5 months to "thoroughly" go through "all aspects" of the issue and falsely concluded that the issue was not an issue.
I wrote back and asked if I had misunderstood something and gave concrete examples of customer ids that actually would give data back.
I got a reply telling me that they had forwarded this information to the person responsible for the security of gjensidige.no.
I got more feedback telling me that they had confirmed that there was an issue and re-opened the case.
I got a new e-mail telling me that they had fixed and closed the issue in production 9 days earlier.
Writing this post I was surprised to see that the issue with e-mail spoofing remained. I had assumed they closed it earlier on. I sent a new e-mail asking about this.
I got a reply telling that the second report with the e-mail spoofing was stopped before it reached Group Security and the IT department.
It actually took Gjensidige 4.5 months to fix an information leak of this severity.
Gjensidige has been very polite and nice in their communication, and grateful for the reports. I would have said their reception and handling was great, had it not been for the fact that my own and other customers' data remained accessible for all of 4.5 months after the initial report.
While I understand that not everybody has a flexible server environment like Digipost or Skandiabanken, taking 4 weeks from the second report to a fix in production isn't very impressive. This is an insurance company. They need to ensure that their customers' data is safe with them.
Gator Watch - a GPS watch for kids - is leaking data at every turn, and anyone on the Internet can live track your kid. We're not talking about a security vulnerability; we're talking about non-existent security.
Who: | Gator Watch |
Severity level: | Critical |
Reported: | August 2017 |
Reception and handling: | Poor |
Status: | Not fixed |
Reward: | A thank you |
Issue: | Anyone can track any watch, listen to voice messages, fake location, etc. etc. |
I bought a Gator 2 smartwatch for my kid after reading some reviews about different watches. The concept was pretty good, but how good was the security?
The company behind Gator Watch claims to have sold more than 300,000 watches. As far as I can tell, they are all trackable by anyone on the Internet.
I reported the issue at the start of August and gave the vendor 90 days to fix it, so I was planning to publish this post at the start of November, but suddenly the case was all over the news.
The Gator watch is in some places branded as the Caref GPS Phone Watch. I'm not entirely sure if this is the same as the Gator 1 watch or if it also covers the Gator 2.
Looking back at this issue, I really started at the wrong end. The biggest issues were so easy to spot without being very technical.
Playing with the watch and its config, I saw that it's possible to set which server the watch connects to.
First I tried connecting to the Gator Watch server via telnet trying to send HTTP commands. That failed, and I tried a couple of other commands, but the remote server just closed the connection on me.
Then I set it to my own router's IP address, added a port forwarding rule to my machine, and ran a very simple Python script printing whatever came its way. I configured the watch to use my IP, and sure enough, it started sending simple strings with things like the IMEI, positions, battery level, etc. It kept the connection open. The connection was not encrypted and there was no session ID or token of any kind; it always used the IMEI as the identifier. This was pretty shocking.
The quick-and-dirty Python script I hacked together to listen to the watch:
#!/usr/bin/env python
import socket

TCP_IP = '0.0.0.0'
TCP_PORT = 17015
BUFFER_SIZE = 20

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)

conn = None
try:
    conn, addr = s.accept()
    print 'Connection address:', addr
    while 1:
        data = conn.recv(BUFFER_SIZE)
        if not data:
            break
        print "received data:", data
        #conn.send(data)  # echo
except KeyboardInterrupt:
    print '\nBye bye :)'
finally:
    if conn is not None:
        conn.close()
    if s is not None:
        s.close()
I sent some of the commands I had captured to the Gator Watch server via telnet, and now it kept the connection open. This way one can easily spoof anyone's position as long as you have the IMEI.
Here's some of the data our watch sent to my fake server (I've randomized the data and removed some newlines):
received data: (45353IMEIHERE416P02,GT03.V10.20170303,7,23201) received data: (45353IMEIHERE416P02,G,160805A6027.0330N00512.3931E000.41058310.00,6) received data: (45353IMEIHERE416P02,G,160805A6027.0352N00512.3879E001.411000074.23,6) received data: (45353IMEIHERE416P02,0,1,160805,220200,5) received data: (45353IMEIHERE416P02,0,1,160805,220200,5)
The data contains the software version, IMEI, time, location method (GPS vs Wi-Fi), location coordinates and remaining battery.
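The coordinate fields look like NMEA-style ddmm.mmmm values ("6027.0330N" reads as 60° 27.0330' north). This is my reading of the randomized sample strings above, not a documented protocol, so treat the sketch below as a hedged guess:

```python
# Hedged sketch: convert an NMEA-style ddmm.mmmm coordinate (as the watch
# protocol appears to use) to decimal degrees. The format is inferred from
# the randomized sample data, not from any official documentation.

def to_decimal_degrees(coord):
    """'6027.0330N' -> ~60.45 (degrees + minutes/60, signed by hemisphere)."""
    value, hemisphere = coord[:-1], coord[-1]
    dot = value.index(".")
    degrees = int(value[:dot - 2])    # everything before the two minute digits
    minutes = float(value[dot - 2:])  # mm.mmmm part
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in "SW" else decimal

print(round(to_decimal_degrees("6027.0330N"), 2))  # 60.45
```

If the reading is right, each unencrypted position string the watch sends is a child's location to within a few metres, keyed only by the IMEI.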
Sending the data to the server, you can get back who is allowed to call the watch, like #555-2368#Dad. That makes it pretty easy to identify the family of the child.
Just as in my first case, I set up Charles as an HTTP+HTTPS proxy to listen to the communication between the Gator Watch server and the Gator Watch app. I was shocked to see that there is no encryption between the server and the app.
What's more, the app and server use an incremental integer as the ID for the watch. I could - and still can - just change the ID and get the position of any other watch. And it didn't stop there. I could - and still can - easily download the voice messages left by any child or their parent. I changed the ID and got the position of a kid in Sweden. I only downloaded one other voice message, and it was apparently a Swedish parent sending some sort of first test message to his kid.
I downloaded the APK for the Android app and decompiled it. It was a hybrid Angular app. Again, there were no signs of the app using https for anything. The JavaScript source code gave away addresses of other servers, and I found a URL to what looks like an admin interface.
Being an Android developer, I work most days with the phone's system log on the screen. What I didn't see until after I had found out everything else is that the Gator app actually constantly logs the URL it calls. You can just open that URL in your browser, change the integer ID, and get the position of any child wearing the watch.
The issues I saw were these:
Saturday evening I sent an e-mail to customer support of Gator Norway explaining the issues.
Tuesday morning I got a reply thanking me for sending them the information. They would check this out as soon as possible. And "We want our watch to be safe for both children and parents!"
Without my knowledge, the Norwegian Consumer Council and the security company mnemonic had started their own testing of different smartwatch brands sold in Norway - among them the Gator 2 watch.
I asked for a status on the issue.
It took them 3 weeks to get back to me, and they said it "would take some time" to fix this. I can understand that adding security to a completely broken system isn't done very easily. They also said that the Gator 3 watch and the Gator 3 app were already secured.
The Norwegian Consumer Council suddenly broke the news about security issues in smartwatches for children (Norwegian article) - among them Gator Watch. BBC also covered the story (English article). This is when I decided to write this post, with the technical details and all. There was no reason to wait the normal 90 days when the news was already out. It's important that as many people as possible are told about this.
If you ask me, this is as bad as it can get. As a parent you want your kid to be nothing but safe. And when you buy a product like this you expect it to make them safer. But what happens is that you put your child at risk. Any predator can track your kid, and even start to see patterns in when a child usually goes to e.g. school or after-school activities. It would even be possible to fake the position of the child, tricking parents into believing everything is fine or that their child is somewhere other than where they are looking.
As a developer I just cannot understand how a product like this can end up on the market. Any developer involved in the project at any level would know that this is a really bad product. It's not like someone made a mistake; no one cared to add any layer of security at all.
If your child is using Gator Watch I would recommend you stop him or her from doing that. Now.
For those of you wanting to go even deeper I would recommend the solid report by Norwegian Consumer Council and mnemonic (PDF, English, 49 pages). They also cover some other brands, go into more privacy issues and show how they did the technical tests.
Memoria - a digital memory book and social platform for persons in care - had a webapp with vulnerabilities for reading, changing and deleting others' messages and pictures.
Who: | Memoria |
Severity level: | High |
Reported: | August 2017 |
Reception and handling: | Good |
Status: | Fixed |
Reward: | A thank you |
Issue: | Users could read, alter and delete other users' content. |
Watching TV one night in August, there was this news story on TV 2 about a digital memory book and social platform for communication between families, healthcare professionals and users of care services. A great idea, and it seemed like a pretty good product. Of course, I wondered if their security was in order. I mean, this is a site with a lot of personal stuff, like messages, pictures and personal stories.
I created a profile and surfed around the site while having my browser development tools open. The site runs good old Angular 1.x with a lot of Ajax calls transferring JSON data.
The pages would be of the style hxxps://app.minmemoria.no/#/patients/<some patient ID>/albums. So what would the URL be for some kind of administrator page?
I guessed hxxps://app.minmemoria.no/#/admin and was right. While there was some kind of authorization check, I got partial access. I could e.g. see all the institutions in the system, and was able to create my own new institution. I did not try to delete any, but I wouldn't be surprised if that was possible.
Many of the URLs had some kind of ID, so of course I tried changing them to see if I could get hold of other people's data. But the ID wasn't your regular incremental integer, so I had to create another account and see what kind of IDs it got. Now I was logged in with one user in Chrome and one user in Vivaldi. I'm still not sure what the system for the IDs is, but it is a big number that changes quite a bit from one entry to another. It doesn't seem to be a timestamp with milliseconds or seconds, but it doesn't vary so much that you couldn't guess or brute force other people's IDs.
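A sanity check I find useful in cases like this: collect a handful of IDs with your own accounts and look at the gaps between them. The smaller and more regular the gaps, the cheaper it is to brute force the IDs in between. The numbers below are made up for illustration:

```python
def id_gaps(observed_ids):
    """Gaps between IDs you legitimately observe.

    Small, regular gaps mean the IDs in between are cheap to brute
    force even though the numbers themselves look big and random.
    """
    ordered = sorted(observed_ids)
    return [b - a for a, b in zip(ordered, ordered[1:])]
```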
In general there seemed to be proper authorization checks when the URL contained one ID - just like the first one mentioned. However, there were quite a few URLs of the format hxxps://app.minmemoria.no/#/patients/<some patient ID>/<some entity type>/<some entity ID>, and at least in some cases there was no check of whether the logged-in user was allowed to access that entity ID.
For example I could read other persons' stories using the URL hxxps://app.minmemoria.no/#/patients/<a patient I had access to>/stories/<some other patient's story ID>.
This type of failing authorization check was the same for PUT and DELETE calls. So I was able to change other persons' stories and delete other persons' pictures. (As mentioned, I created several users and patients so I only accessed and altered contents between these accounts.)
The Curl command copied from Chrome for changing others' messages looked like this:
curl 'https://app.minmemoria.no/api/personas/<a patient I had access to>/events/<message ID>' \
  -X PUT \
  -H 'Origin: https://app.minmemoria.no' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'x-request-id: <some UUID>' \
  -H 'Accept-Language: nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36' \
  -H 'Content-Type: application/json;charset=UTF-8' \
  -H 'Accept: application/json, text/plain, */*' \
  -H 'Referer: https://app.minmemoria.no/' \
  -H 'Cookie: <session cookie++>' \
  -H 'Connection: keep-alive' \
  -H 'x-service-version: 1.0' \
  --data-binary '{"articleBody":"My altered message"}' \
  --compressed
The Curl command for deleting others' pictures looked like this:
curl 'https://app.minmemoria.no/api/personas/<a patient I had access to>/folders/<folder ID>/assets/<picture ID>' \
  -X DELETE \
  -H 'Origin: https://app.minmemoria.no' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'x-request-id: <some UUID>' \
  -H 'Accept-Language: nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36' \
  -H 'Accept: application/json, text/plain, */*' \
  -H 'Referer: https://app.minmemoria.no/' \
  -H 'Cookie: <session cookie++>' \
  -H 'Connection: keep-alive' \
  -H 'x-service-version: 1.0' \
  --compressed
I feel pretty sure there were more problems than these, but I had found more than enough to report.
The issues I saw while doing a quick test of the site:
Surely there were other issues here as well. I stopped checking for more when I found these.
At night, I sent an e-mail to their contact e-mail address.
Just after lunch I received an e-mail thanking me for the discovery and telling me that they had reported it to the developers.
I never received any more replies, so I don't know when they fixed it.
I sent a new e-mail asking what the status was.
I got an answer telling that they had fixed the issues.
Privacy of any social media platform is so important. It's so easy to create web sites today, but it's still hard to make them properly secure.
However, in this case there seems to be a big lack of understanding of how to secure web apps - and/or the desire to do so. Memoria doesn't appear very concerned about security when they had issues like these. I wish they would show more respect for the care service users and their families. I hope they'll use a third party for security audits in the future.
ikea.com stores logged-in users' names in a cookie that is sent unencrypted over HTTP. They also had an XSS vulnerability that made it easy to get hold of the name.
Who: | IKEA |
Severity level: | Low to medium |
Reported: | August 2017 |
Reception and handling: | Poor |
Status: | Partially fixed |
Reward: | A thank you |
Issue: | Cookie with full name sent over HTTP and XSS to get hold of it. |
I have both ordered stuff online at ikea.com and designed a kitchen and a walk-in wardrobe using their planners. That means that I have an online account there.
I have always liked IKEA. Lots of cheap furniture and other products, and mostly good value for the money. But even though they figured out warehouses a long time ago, I have had bad experiences with their online store. They didn't deliver the products on the estimated dates, they didn't deliver the right quantities and they were unable to deliver the whole order, leaving me stuck with stuff I had to store and couldn't start to assemble.
I browsed ikea.com while having the browser development tools open.
Looking at the site's cookies I noticed that some of the values clearly were Base64 encoded. They store so many cookies that there might be more hidden treasures, but I decoded a few and found that my full name and selected IKEA warehouse were stored in one of them. There was also one, without any encoding, containing my postal code and place.
Using a script like the following is a quick and dirty way of getting the cookie names and values (though some of the values are partly Base64 encoded):
document.cookie.split(';').forEach(function (cookie) {
    var parts = cookie.split('=', 2);
    console.log(parts[0] + ' = ' + decodeURIComponent(function (parts) {
        try {
            return parts[1] + ' / ' + atob(parts[1]);
        } catch (e) {
            return parts[1];
        }
    }(parts)));
});
decodeURIComponent() is used for decoding the URL encoded values.
atob() is used for decoding the Base64 encoded values.
ikea.com is served both with and without SSL. The cookies containing name and location are not flagged with either HttpOnly or Secure. That means two things: your name is sent unencrypted over the Internet if you type ikea.com into your browser, and the cookie can be stolen via XSS.
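For comparison, this is roughly how a cookie can be issued with the missing flags set, sketched with Python's standard library as a stand-in for whatever stack ikea.com actually runs:

```python
from http import cookies

cookie = cookies.SimpleCookie()
cookie["user_info_16"] = "some-encoded-value"
cookie["user_info_16"]["secure"] = True    # only ever sent over HTTPS
cookie["user_info_16"]["httponly"] = True  # not readable from JavaScript

# The resulting Set-Cookie header value contains both flags.
header = cookie["user_info_16"].OutputString()
print(header)
```

With Secure set, the cookie is never sent over plain http; with HttpOnly set, an XSS payload cannot read it via document.cookie.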
I didn't find any XSS vulnerabilities myself, but a quick Google search led me to https://www.openbugbounty.org/search/?search=ikea.com&type=host, which lists several XSS possibilities.
There's nothing like a proper "proof of concept" when reporting a bug or security issue, so I created a small JavaScript for fetching a user's name:
var index;
var key = 'user_info_16=';
index = document.cookie.indexOf(key);
if (index === -1) {
    alert('You have not visited ikea.com, have you?');
} else {
    var temp = document.cookie.substring(index + key.length);
    temp = temp.substring(0, temp.indexOf(';'));
    if (temp === 'notloggedin') {
        alert('You are not logged in, are you?');
    } else {
        temp = atob(temp);
        temp = decodeURIComponent(temp);
        temp = temp.split(';');
        alert('Is your name ' + temp[0] + ' ' + temp[1] + '?');
    }
}
The script searches for the user_info_16 cookie, decodes the Base64 encoding and extracts the user's name and presents it in a dialog.
Now, browsers today often implement some sort of XSS auditor to protect the user, so the above script didn't work out of the box in browsers built on later versions of Chromium. To use this in an attack one would probably have to make some browser-dependent adjustments. Using an XSS filter evasion cheat sheet one can probably always find a vector that works for any browser brand and version.
The URL worked just fine in the latest version of Firefox, as seen in the screenshot.
The three issues in question were these:
I searched and searched to find some sort of contact point that wasn't the regular customer service phone. I failed. But I found a list of press contacts and tried the first one. I never got an answer.
I sent an e-mail to the second press contact with the first one on copy. I never got an answer.
I sent an e-mail to the third press contact with the two first ones on copy. I also mentioned that I was now going to give up.
I didn't hear back from any of the press contacts directly, but I did receive an e-mail from an "mCommerce Specialist" asking for details. I wrote back with the technical details. It had taken 19 days before I was able to give any details about the issues.
I got a reply the same day saying that they were going to look into this. They also said that reports like this should go through their press center. It would be nice if they wrote that somewhere on their website, and if the press contacts actually responded.
I asked if they were actually going to do something about this, and whether I should wait before writing about these issues.
I got a longer good answer explaining more about the challenges and timeline. They are working on it, but it probably won't be fixed anytime soon.
As mentioned before, you should assume that anyone who wants to, knows everything about you and everything you do. Even when you think you are anonymous you might not be just that.
If you are concerned about privacy you want to both log out of sites and delete the cache.
I also hope that we'll soon see most websites switch to secure communications only and leave http behind.
To try it on other pages, just drag the button to your bookmark row in your browser. After that you can just click the bookmark when visiting other sites.
A bookmarklet is a bookmark stored in a web browser that contains JavaScript commands that add new features to the browser. Bookmarklets can be useful tools, e.g. for increasing the readability of web pages, do searches, create short urls, etc.
DOM II: JavaScript Hell might not be very useful, but hopefully it's an enjoyable small game if you're bored or if you're disliking a web site. :)
I never forgot about the similar bookmarklet created by Erik Rothoff Andersson in 2010. I wanted to create something like that, but with my own code and my own twists and also have mobile support.
Oh, and the name? Since this game is all about DOM manipulation I figured that the name would be a nice play-on-words and tribute to Doom and more specifically Doom II: Hell on Earth.
As I write a bit about security I think it's natural to give you a word of warning when it comes to bookmarklets. My bookmarklet is safe to use, but you shouldn't take my word for it. You should never run bookmarklets on pages that hold private information. Luckily, online banks etc. today use a Content Security Policy that stops bookmarklets from being run on their pages. Otherwise one would risk e.g. financial or private information getting into the wrong hands.
Please leave a comment or send me an e-mail if you see any bugs or have any suggestions for the game. My guess is that there will be quite a few mobile devices that have some odd values reported from the gyroscope.
A Norwegian company with a centralized online project management tool had an SQL injection vulnerability for at least 10 years.
Who: | Anonymous, let's call them Acme3 |
Severity level: | Critical |
Reported: | Summer 2017 (and possibly 2007) |
Reception and handling: | Poor |
Status: | Who knows.. |
Reward: | A thank you |
Issue: | SQL injection affecting all customers |
In 2007 I was working for a company that started using a SaaS project tool and, more or less, a complete CRM. As a software developer I personally used it for time-tracking for the projects and customers I was working with.
It was a very poor tool for time-tracking (as most time-tracking tools are, even today), but that was soon overshadowed when I noticed that the URLs contained SQL. Not only did the service leak data; it was possible to alter it. And not only within our own company, but across all of the service's customers.
I of course told my boss about this. To emphasize the problem I surrounded my boss' name with the infamous <blink/> tag, making it blink constantly while he was logged in.
I also prepared an article for the great "software engineering disaster blog" The Daily WTF which I read daily back then. However, I changed jobs in 2007 and soon forgot all about the article, the security hole and Acme3.
Preparing my blog I looked back at some old issues I had taken screenshots of and made notes about, and of course found this one. Checking out their online demo I saw that they still had the SQL injection issue. 10 years later. Seeing the old screenshots that say "Copyright 2000" one can wonder how many of their customers have been affected by this.
You don't need very heavy measures to find issues when you discover that the URLs of a site actually contain SQL. The URLs were "concealed" because the site used frames. I had a look at their JavaScript and noticed that it built SQL queries which were used as URL parameters.
The URLs looked like this:
http://example.com/lookup.asp
    ?title=Employees&list=0&headers=Employee+Id;First+name;Last+name
    &select=SELECT+EMPLOYEE_ID,FIRST_NAME,LAST_NAME+FROM+EMPLOYEES
    &goURL=someother.asp&key=EmployeeID&projID=&where=&order=3&records=all
It can't really get much worse than this.
So from here one could change the query to e.g. include the password. It's hard to believe, but it does actually look like they have some salt in the password hashes. But that doesn't matter much as it was possible to run UPDATE statements using the URL.
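Crafting such a request is just string handling. A sketch in Python (example.com and the SQL are placeholders; only the parameter names come from the observed URLs):

```python
from urllib.parse import urlencode

def injection_url(base, sql):
    # The vulnerable endpoint accepted raw SQL in the 'select'
    # parameter; the parameter names are taken from the observed URLs.
    query = urlencode({"headers": "version", "select": sql, "order": "ver"})
    return f"{base}/lookup.asp?{query}"
```

Any query string builder, or a browser address bar, does the job; there is no barrier at all.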
My favorite changes were along the lines of these:
http://example.com/lookup.asp?headers=version
    &select=UPDATE+EMPLOYEES+SET+FIRST_NAME='<blink></blink>'
    +WHERE+USER_ID='myboss';SELECT+@@VERSION,@@VERSION+AS+ver&order=ver

http://example.com/lookup.asp?headers=version
    &select=UPDATE+EMPLOYEES+SET+FIRST_NAME='John+"I+better+report+these+security+issues+to+Acme3+soon"+Boss'
    +WHERE+USER_ID='myboss';SELECT+@@VERSION,@@VERSION+AS+ver&order=ver
But what was even more worrying was that each company had a different database in the same database server, and it was possible to do queries across databases. I never tried altering data for other companies, but gaining read access is bad enough. The database user seemed to have access to all kinds of databases and system tables.
Depending on how a company was using the service it was possible for anyone to get access to the following information:
From the look of it, it was possible to alter any data as well.
The company had no presence on social media etc. I never managed to find any e-mail addresses. But they had a contact form on their website, which I used to tell them about this. The only problem was that the form didn't work at all in some browsers and gave no feedback on successful submission in the rest of them.
I got no response.
I tried the contact form once again just asking if the form was working at all. I never got an answer.
Suddenly one night there was someone online in a chat on their site. I filled in my name and asked if the contact form on their site worked at all. The guy just replied "We saw your "security" report". What? Why hadn't they contacted me? He went on to tell me that the issue was fixed now. They "had a round of security this summer". Then he told me to say if I saw anything else, gave me a short "thank you" and finished with "night".
I'm not sure if the conversation was directly unfriendly, but it sure wasn't friendly. And it makes me think that these aren't serious people. Though, at least now I knew the issue had been reported and they claimed to have fixed it. I don't want to check.
This is a bit of a special case going back so many years. Did they receive any reports 10 years ago at all? It would be easier to name them if I was able to communicate with them. The not-so-friendly support chat gave me some bad vibes, and I haven't been able to find out much about the company or the people behind it.
From public financial information I see that they have had 1-2 million USD in annual revenue the last 10 years. As far as I can understand they don't have any other products, so they should have quite a few customers using this service.
From time to time there are news articles about industrial espionage. Companies like Acme3 sure make it easy for those looking for data.
Are you working in one of the companies using this system? Maybe you should use an expert to take a quick look at the systems where your company stores information you don't want to be leaked or even altered.
Is it possible that an issue like this existed for ten years without anyone taking advantage of it? That's hard to believe.
A fitness center chain consisting of three centers was leaking the members' names, e-mail addresses, phone numbers, pictures, bank account numbers, logs of all visits, etc. They are still running vulnerable server software.
Who: | Energi Treningssenter |
Severity level: | High |
Reported: | April 2017 |
Reception and handling: | Good |
Status: | Partially fixed |
Reward: | A thank you |
Issue: | All kinds of personal info was leaked |
Energi Treningssenter at Askรธy is an excellent fitness center. It's modern, big and has all the equipment you want. I used to train there for a while, so I had - and still have - access to the member site where you see your personal details, payment history and full log of your visits.
Some years ago I noticed that the picture taken for the key card to the gym was publicly available. Just knowing the URL you could iterate through the images of all the members without being logged in. I never bothered to report it back then. When I started considering this article series I remembered the issue with the pictures and wondered if my personal data was safe. The whole site had much the same feel as the completely vulnerable PHP site I wrote about earlier.
I logged in to the gym's site while having the browser development tools open. I looked for anything out of the ordinary in the HTTP calls and in the source code.
The first thing that hit me was that all URLs are http and not https. Even when submitting the form with username and password there is no encryption.
I spotted three links to an ASP that was hidden with CSS: display: none;
The ASP page looked like it would let you change database fields for any user, but since I failed to even change my own data I'm not sure what the deal was. However, the page was vulnerable to XSS. A good opportunity to steal the session cookie, which seemed to work perfectly fine across IP addresses. Or one could just make the user send his or her personal data directly.
The source also revealed the use of some kind of "add on" used for file upload. Is it possible to upload code that can be executed? I hope not. I wasn't able to tell for sure, but there were indeed web forms for uploading all kinds of files.
There seem to be three different servers involved in serving the site. Looking at the headers and the default error pages reveals outdated server software with known vulnerabilities. However, as I've stated in the background for these posts, that is out of scope for now.
When you log in you will be told whether the username you entered exists or not. If you use the "Forgot password" function you're told whether the e-mail address you enter is known or not. And the e-mail you receive is not for resetting the password: it just contains both the username and the password. Fail x 3.
The page with all the personal details doesn't have any IDs or anything in the URL, but that didn't stop me from trying to add one. I tried account.asp?id=<some ID>, and voilà, I got access to other users' personal details. The ID was an incremental integer. Iterating the ID one could seemingly get everyone's name, e-mail address, phone number, bank account number, payment history and full visit log.
A lot of personal data of previous and current members was leaked:
In addition there are quite a few issues that are probably still leaving the customer data vulnerable:
I believe that these issues have been around for many years.
At night I sent an e-mail telling them about the information leak and my general concerns about the solution.
Just 1.5 hours later I got a reply telling me that the issue had been forwarded to the right body.
I received an e-mail telling me that the vendor of the system had fixed the issue. I can see that they have fixed the information leak, but everything else is still the same.
This is yet another example of our personal data in the wild. There are countless security vulnerabilities out there. You should assume that anyone who wants to, knows everything about you and everything you do. And companies that have these types of vulnerabilities won't tell you when they become aware of them.
And to start connecting the dots between the cases I'm presenting: do you remember two weeks ago, when you could see the bank account balance using just the bank account number? Well, wasn't it nice that this case gave you that bank account number?
Looking at the old versions of the software running on these sites I would definitely guess that the data is still vulnerable.
A campaign web site from Norway's leading supplier of branded goods, where one could upload images - typically of your kid, and typically including their first name and year of birth - was, and still is, making a small 90 x 90 pixel image publicly available. It was possible to systematically retrieve the data.
Who: | Orkla Food's campaign by Japan Photo |
Severity level: | Low |
Reported: | August 2017 |
Reception and handling: | Good |
Status: | Partially fixed |
Reward: | A thank you |
Issue: | A small version of the uploaded picture, and often the given name and year of birth of the person in the picture, is publicly available |
Stabburet Leverpostei is a kind of liver pâté that has been part of the diet for many Norwegians for generations. It has had a pretty iconic can with a picture of a kid on the front. At first they had the same kid from 1955 to 1970, but in more recent times they have been using the front as more of a marketing opportunity, with competitions, campaigns and a more frequent change and use of several different faces.
In August 2017 I saw a TV commercial telling that if you bought 3 cans of Stabburet Leverpostei you could upload your own picture and receive your own cover that you can use at home.
Of course this made me wonder if the images were securely stored.
I went through the wizard for uploading images, getting the lid and buying other products, while having Chrome DevTools open. I looked for anything out of the ordinary and of course tried different URLs with different IDs and input.
What I saw was that every uploaded image got a UUID which was used when referring to the image on the different web pages.
They also had this share function where you could share the lid you had created to different social media. All it did was refer to this UUID at some URL.
The sensible thing would be to make the image publicly available at some URL the moment the customer chose to share the image. As long as it is public, one should expect the image to be accessible to anyone knowing or guessing the URL.
The first problem was - and still is - that all images - shared or not - are publicly available if you just know the URL. For me this looks like quite a trend. Developers often assume that because a URL is hard to guess it should be considered private. This spring we had some media coverage in Norway on how a change in the browser Microsoft Edge made Bing index a lot of URLs like these (Norwegian text). Yes, the URLs might be hard to guess, but the problem is that the URL will always be valid, it will always be public, and you don't know who's accessing it.
Going through the checkout process I noticed that the URL for the final receipt - http://www.stabburetleverpostei.no/takk-for-din-lokkbestilling/ - included the mentioned UUID. And, what's more, the URL redirecting to that URL had the format http://kampanje.stabburetleverpostei.no/checkout/finished/<some auto incremental ID>. Iterating the ID made it possible to collect the UUIDs from seemingly all the orders (I only tried a few).
E.g., going to
http://kampanje.stabburetleverpostei.no/checkout/finished/2095270
would redirect to
/takk-for-din-lokkbestilling/?lokkid=0498600376a123f1530f1fed7083b350
which meant that the image could be seen at
/bestill/streamthumb/0498600376a123f1530f1fed7083b350.
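Chaining the two observations together is trivial: request the incremental "finished" URL, note where it redirects, and pull the UUID out of the query string. A sketch of that last step (an enumeration script would simply loop over order IDs):

```python
from urllib.parse import urlparse, parse_qs

def uuid_from_redirect(location):
    # Extract the image UUID from a redirect target like
    # '/takk-for-din-lokkbestilling/?lokkid=<uuid>'.
    params = parse_qs(urlparse(location).query)
    return params.get("lokkid", [None])[0]
```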
I found two issues in the campaign web site. One is now fixed, but the other persists:
The fact that all images are publicly available is not mentioned in the terms of this campaign.
At night I sent an e-mail to the contact address (for Eurofoto, owned by Japan Photo) telling them about my findings.
Just before midnight I received an e-mail telling me that they had stopped adding the image UUID to the URL of the "thank you" page. That's a very impressive response time. It does, however, seem like all images are still publicly available.
On one hand one could blow this up really big: a lot of pictures of kids, often with what's probably their real given name and their year of birth. But let's be real; in this case we're talking about small images where only about 90 x 90 pixels show the person itself (the rest is the rest of the can with the name, year and so on).
Now, this is speculation, but I wouldn't be surprised if the full size images are available at some other public URL. However, I did not find one. And looking at the image data being uploaded, we're talking about an image size (of the person) as small as about 220 x 220 pixels. That is still a pretty low resolution.
Also, there is no connection between the images and data like full name or location.
You - as a consumer - should always assume that any images or information you upload or send to some third party can end up in the wrong hands or be made publicly available.
All you developers: Please don't think that UUIDs make data private. You still need authentication and authorization, and you still need to check that they're actually working.
If companies choose to store images like in this case, they should indeed mention that in the terms of the site. That is not the case here. I also wish they would mention for how long they are storing the images.
I've uploaded all the HTML files from a Norwegian Internet Service Provider's Internet demo diskette so you can (almost) surf like it's 1995.
I didn't really know what the World Wide Web was about. The diskette contained the web browser Mosaic, and it actually gave a feel of what the Internet was like.
Below is the demo itself. It's nice to see the HTML pages still working the same 22 years later. You can also open the demo in full screen.
When I tried the demo disk I remember being curious about how those pages worked. I found the source code and from there I wrote my first lines of HTML. It was pretty cool to see how the markup was rendered in the browser from the floppy disk. Little did I know I would code web pages for a living 10 years later. :)
The Norwegian bank Skandiabanken leaked the balance of other customers' bank accounts. I also question parts of their session handling.
Who: | Skandiabanken |
Severity level: | High |
Reported: | September 2017 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | A big thank you |
Issue: | Information leak with other customers' bank account balances and account names |
Skandiabanken - soon to be called Sbanken - is a fairly large bank in Norway with more than 400,000 customers. It was Norway's first pure online bank when it started in 2000. I have been a customer since the start, and it has been my favourite bank all along.
This summer a regulation for personal savings accounts for shares was approved. From September 1st 2017 it was possible to move shares and funds into this new type of account. The timing meant that all banks in Norway were suddenly in a hurry to get the product ready.
The morning of the opening of the new account type I was logged in to create one for myself. I noticed that there were a few missing text translations and some places where it said undefined in the user interface. This new part of the bank wasn't entirely bug-free yet.
I opened Vivaldi developer tools when logged in, to see what was going on with regard to network calls. I was surprised to see that one of the presumably new Ajax calls contained one of my bank account numbers. I could be wrong, but I think it's atypical for them to use that ID when asking for data from the backend. That of course doesn't mean anything in itself, but I got curious and wondered if my data was properly secured.
The Ajax call returned JSON with the balance and some other data about the bank account in question. I asked a friend for a bank account number and permission to check if I could get any of his data returned. And indeed I got his data.
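The core flaw is a classic insecure direct object reference (IDOR): the backend trusted the client-supplied account number instead of checking that it belonged to the logged-in user. A minimal sketch of the broken versus fixed lookup (all names and data here are hypothetical, not Skandiabanken's code):

```python
# Toy account store (hypothetical data).
ACCOUNTS = {
    "9710.11.12345": {"owner": "me", "balance": 1000},
    "9710.11.67890": {"owner": "friend", "balance": 2500},
}

def balance_vulnerable(session_user, account_number):
    # Trusts the client-supplied account number: any authenticated
    # user can read any account's balance.
    return ACCOUNTS[account_number]["balance"]

def balance_fixed(session_user, account_number):
    # Verifies ownership before returning anything.
    account = ACCOUNTS.get(account_number)
    if account is None or account["owner"] != session_user:
        raise PermissionError("not your account")
    return account["balance"]
```

Authentication (who you are) was in place; authorization (what you may see) was the missing half.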
Knowing just the bank account number of another customer, one could retrieve data including the account's balance and name.
I notified the bank and they immediately responded and started checking out the issue.
Just hours later they had rolled out a fix for the problem. This must be the quickest fix I have ever seen for a security issue.
Later the same day I was phoned by one of the executives, who thanked me and told me they were grateful that I had found and reported the issue.
I had left my browser logged into this new part of the bank called "Min sparing" ("My savings"). When I returned to the computer quite a bit later I noticed that I was still logged in. And I noticed that I could close and open my browser and still be logged in to this part of the bank. Going to other parts of the bank would log me out from everything.
I reported this by e-mail, but just after that I learned that this part of the bank has 9 hours session time and not 20 minutes as most parts of the bank. I felt a bit embarrassed for reporting a non-issue and wasting their time.
The next morning I realized something. Though this "My savings" session time was intentionally high, Skandiabanken offers simultaneous logins, and logging out from one session doesn't invalidate any others. This means that if you are able to get access to a computer where the user forgot to log out after accessing "My savings" in the last hours you can get hold of the cookies and keep the session alive by only calling the server once in a while.
What's more is that you can do this call from any location. You don't have to use the same computer or IP address. The "My savings" page gives a pretty good glimpse into your finances (like shares, funds and some transactions), and using the mentioned Ajax call you could also use the same cookie to access the balance of other known account numbers for that logged-in user. Hopefully the session can't be kept alive forever without signing in again. While testing I had this one session alive for more than 36 hours (while changing locations and having other devices logged in and out).
Skandiabanken replied and told me that this session handling is a feature and not a bug. They want a long session time, and they don't want to restrict the session to IP addresses because of mobile clients.
Skandiabanken seems to have removed the bank account number from the Ajax call, making it always return the balance of the payment account for funds and making the "My savings" page only fetch savings-related data. I would say that's a step in the right direction. The 9 hours session time seems to stay the same.
As far as I know the security hole with balance access was introduced that morning and was only in the wild less than a day. I have worked with online banking as an IT consultant and know how seriously security is taken in that industry. I was pleased - but not surprised - to see how seriously and professionally Skandiabanken handled everything.
I feel confident that this issue would've been discovered relatively quickly by the bank itself had I not reported it. However, for me online banking is one of those services that just need to always be secure and never leak information like this.
What makes me a bit uneasy is the session handling feature/issue where someone could be watching my economy with a logged in session that I'm unaware of.
Please remember to always hit that Log out button.
My first post on this blog was Building almost a real SPA blog. While the title is still mostly true, I ended up adding a bit more .htaccess config and a small Python script to provide previews of posts on social media and to add support for a web feed.
I was able to build this blog as a single-page application. I had - and still have - all contents in one single static JavaScript array and use the JavaScript framework Knockout and the front-end framework Materialize. So far so good.
Google has since 2014 been able to render and understand JavaScript web applications. Using the Search Console's Fetch as Google I have been able to verify that Google understands my site just fine. Doing a site: search also confirms that.
I have to admit that I don't care too much about other search engines like Bing, Yahoo, etc. Google is in its own class when it comes to searching. However, because of the mentioned Python script I now have a tool for that as well. More about that in a second.
I sometimes post my articles to Twitter, Facebook and LinkedIn. When creating a preview of the links, those sites certainly do not try to parse any of the contents the way a modern browser would. They just look for Twitter Cards or Open Graph Protocol (OGP) meta tags. If they don't find anything they typically default to the <title/> tag.
My index.html only has a standard <title/> tag which makes no sense in the context of my articles. That makes the previews pretty bad, so I mostly skipped them in the start. However, I think the content is so much better with a good preview of the contents. I'm sure it also makes more people click the links.
Naturally one doesn't get any web feed like Atom or RSS out of the box when building a single-page application. And that's also one of the things I wanted to have for my site.
So far I've added support for Atom which you can access at https://blog.roysolberg.com/atom.
I couldn't find a way to solve the preview challenge without resorting to some backend code. I really like the programming language Python. Writing just a few lines of code I was able to create a script that read the contents in the JavaScript array and produced the necessary OGP tags. You can see the source code of the script on GitHub.
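The actual script is in the linked GitHub repo; as an illustration only, the core transformation - one post record in, a handful of OGP meta tags out - could look roughly like this (the field names and base URL are hypothetical, not my real data model):

```python
from html import escape

def ogp_tags(post, base_url="https://blog.example.com"):
    """Render Open Graph meta tags for a single post dict."""
    tags = {
        "og:type": "article",
        "og:title": post["title"],
        "og:description": post["teaser"],
        "og:url": f"{base_url}/{post['slug']}",
    }
    # escape() also escapes quotes, so content attributes stay well-formed.
    return "\n".join(
        f'<meta property="{prop}" content="{escape(value)}" />'
        for prop, value in tags.items()
    )
```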
Now that I had the script for generating the preview I needed a way to route the bots to it. For that I used the already existing .htaccess configuration file and look for the User-Agent header belonging to the different sites' bots. You can see the source code for .htaccess on GitHub.
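The real rules live in the linked .htaccess; the shape of the idea is roughly this (the bot list and script path below are illustrative, not my exact configuration):

```apache
# Send known social media and search bots to the server-side preview
# script; regular visitors still get the JavaScript single-page app.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (facebookexternalhit|Twitterbot|LinkedInBot|bingbot) [NC]
RewriteRule ^post/(.*)$ /preview.py?slug=$1 [QSA,L]
```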
Because of the preview setup I got support for other non-Google search engines for free. They access the same code as generated by the Python script. Google is still served the same site as you are.
Having the preview script, it didn't take long to add support for the Atom feed. You can see the source code of the script on GitHub.
I don't think the Open Graph Protocol is very flexible or ready for single-page applications and thicker clients.
The first thing that hit me when creating the script was that I couldn't just inject the meta data as HTTP headers. I had to create HTML markup for it. I can't understand why one shouldn't be able to choose between headers and tags.
Secondly, the different social media sites need to start doing what Google has been doing since 2014: render the pages with some kind of headless browser to be able to understand the contents and pick up dynamically injected OGP meta tags. It's not that much magic, nor that resource-demanding, in 2017.
If you're interested you can have a look at the source code for this blog at https://github.com/roys/js-web-blog. The project itself is licenced under the MIT License, but for the contents (posts and images) I reserve all rights.
A small Norwegian company is using a completely broken and open amateur PHP CMS. The site is vulnerable to (at least) local file inclusion and it's possible to completely take over the CMS.
Who: | Anonymous, let's call them Acme2 |
Severity level: | Critical |
Reported: | August 2017 |
Reception and handling: | Poor |
Status: | Not fixed |
Reward: | Not even a thank you |
Issue: | Completely broken PHP web site |
I actually can't remember why I landed on the web site. Someone asked me to read something on this or a linking web site. Having seen a lot of bad sites over the years you can sometimes tell just from looking at the front page that you'll find some issues there.
The site is for a very small Norwegian company doing some kind of training programs for other companies. It has a public-facing login for editing the content inline and is written in the server-side scripting language PHP. There was a time when PHP was very popular, but I guess those days are over.
Once in a while I take a quick peek at web sites' source code. I was expecting to find something on this particular one, but I didn't expect to find it that quickly:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>[name of company]</title>
  ...
  <link rel="stylesheet" type="text/css" href="merge.php?css[]=%2Fcss%2Fstandard.css&css[]=%2Fcss%2Fcustom.css" />
  ...
</head>
...
</html>
Can you spot the opening? Obviously the CSS was produced by some PHP script combining separate CSS files into one. Could it be that it would work for other files and directories?
Well, first I had to find the name of some other PHP files. I didn't see any, so I logically guessed index.php: http://example.com/merge.php?css[]=index.php
But then it responded something like /* File is not a CSS or JavaScript file. */
Good, maybe there was some sort of security in place after all?
As you might know, PHP is built mainly using the programming language C. And C uses null bytes to terminate strings. So the next natural step is to try a URL like http://example.com/merge.php?css[]=index.php%00.css
where the script will still check for the .css file ending, but when using the input string to include a underlying file it'll only use the part up to the null byte (URL encoded as %00).
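The trick is easy to demonstrate: the high-level string passes the extension check, while the C string underneath stops at the first null byte. A quick illustration in Python (the vulnerable code here was PHP, but the principle is the same):

```python
user_input = "index.php\x00.css"

# The naive validation sees the whole string and is satisfied...
assert user_input.endswith(".css")

# ...but a null-terminated C string ends at \x00, so the file
# actually opened by the underlying runtime is index.php.
effective_path = user_input.split("\x00", 1)[0]
assert effective_path == "index.php"
```

As far as I know, newer PHP versions reject file paths containing null bytes, but old installations like this one are still out there.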
This was a bullseye. I got the source of the index.php file back. At this point you know that you are golden.
Normally I would just provide a URL like http://example.com/merge.php?css[]=../../../../../etc/passwd%00.css
just to prove the point to the site owner. Not that /etc/passwd contains any passwords any longer, but it will typically reveal usernames and shows that one can retrieve almost any file from the file system. However, this was a Windows machine and index.php didn't contain anything interesting in itself.
index.php did however point me towards other script files, and moments later I got hold of a PHP file that actually defined the username and a hashed version of the admin user's password. Given the state of the site I'd be surprised if there even was a random salt per user. So what do you do when you have a hash? Well, you search an online rainbow table site.
I got an immediate hit for the hash. Not surprisingly the hash function used was the no longer recommended SHA-1, and the password itself was so short and simple that you'd probably find it on lists of the most common passwords. Using the username and password you'd get full access to the content management system for the site.
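That instant hit is exactly why fast, unsalted hashes fail for passwords: every occurrence of the same password produces the same digest, so one precomputed table reverses them all. A toy illustration (the passwords here are made up, not the actual one):

```python
import hashlib

def sha1_hex(password):
    return hashlib.sha1(password.encode()).hexdigest()

# A miniature "rainbow table": digest -> plaintext for common passwords.
table = {sha1_hex(pw): pw for pw in ["123456", "password", "qwerty"]}

leaked_hash = sha1_hex("password")       # what a leaked config file exposes
assert table.get(leaked_hash) == "password"  # instant reversal
```

A slow, per-user-salted scheme (bcrypt, scrypt or Argon2) makes precomputed tables like this useless.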
At this point I stopped looking any further. I had enough info to report the issue and give a sense of the severity of what I saw.
In my eagerness to find the security holes in the web app (and due to lacking branding) I didn't notice that this was actually not a homemade PHP site like I first thought. I didn't see it until writing this post, but there was a very interesting HTML meta tag revealing the CMS in use.
While I couldn't easily find the full history, it seems like gpEasy CMS is an old version of Typesetter CMS. I couldn't find the file inclusion among the issues on CVE details, though there was another site with quite a few issues listed.
This is a completely broken PHP CMS. It's possible to retrieve about any file on the server, and you can change any contents of the site.
This is only speculation as I didn't look further after finding the file inclusion issue, but when you find a site like this you most probably also will find other types of holes like remote code injection.
I sent an e-mail to the contact address. 8 minutes later I got a response. That's a new record. But the answer wasn't that uplifting. Apparently the site hadn't been in much use the last few years (though I would definitely have guessed otherwise looking at the public accounting and taxation figures..). They didn't seem interested in doing anything with the site or server. So hopefully there isn't anything of value there, and hopefully no one turns the server into some kind of zombie.
This is a small company, possibly not in any real operation. They haven't fixed - and probably won't fix - the issue. I don't think it's right naming them. Let's just hope no one else finds the site and that they don't have any important or customer data on the server.
This was a blast from the past for me. It's been quite some time since I last saw an amateur PHP site like this. PHP was once very hot as a scripting language, partly because it is very easy to get started with, but it had a lot of pitfalls and it was so easy to make a site insecure. As the language matured, its usage went down. Today most issues I see are HTTP service endpoints with lacking authorization checks.
It isn't that the home automation system HDL-BUS Pro has any security holes; it doesn't have any security. If your house, the hotel you're staying at or your business uses HDL you should definitely read on.
This spring I moved into my new house. When building a house in 2014/2015 you kind of feel obligated to make it a bit smart. Being a programmer makes it a must. I looked into quite a few systems and protocols for home automation. Since this is a new building I preferred a cabled system instead of a wireless one. The electrical contractor for the house wasn't much up to date on smart homes, but luckily they had a few electricians who know and install HDL-BUS Pro systems. So a bit coincidentally I ended up with HDL.
Long before the actual installation I went to a training for "programming" (really configuring) the system. I was very curious about the underlying protocol and how stuff worked under the hood. Luckily HDL is open about its buspro protocol - and that's a healthy sign - and I learned about and was given the specification for the internal communication between the components. This was when I first became a bit surprised about the lack of security. It's a straightforward, simple protocol - and that's a good thing - but it completely lacks encryption, authentication and authorization.
HDL has a component called IP gateway which is a gateway between ethernet and the wired HDL components. The IP gateway is necessary to configure the components through the Windows application called HDL-BUS Pro Setup Tool. It also supports remote configuration from anywhere on the Internet.
If you have an IP gateway connected to your ethernet you want to make that a network that isn't reachable for unauthorized parties - meaning that both wired and wireless network shouldn't be available for anyone you don't trust. My neighbour was over the other day and casually asked "What's the password for the Wi-Fi?" Of course, I run the guest Wi-Fi in my house on a separate network so I could give him access. However, I suspect that most people (or businesses) with HDL don't realize the dangers and let anyone access the same network. If you want your IP gateway to be available via your Wi-Fi you want to make sure that the encryption, password and security in general is at a high level.
Very much like precaution #1 regarding Wi-Fi and cabled ethernet, you should think twice if you have your ethernet available over your powerlines. What about that power outlet you have outside your house or just inside the garage?
With so many "trusted" devices connected to your Wi-Fi, chances are that the security of one or more of them has been compromised. A typical home Wi-Fi for a family has several phones, tablets, laptops, TVs, and a video game console connected. Also, with the Internet of Things on the rise, more and more units are allowed on your local network. If only one of those is compromised, someone could theoretically get access to your smart home. Considering precautions #1-3 you probably shouldn't have the IP gateway connected to the ethernet at all.
Do you have any outdoor sensors for e.g. temperature or motion connected to your system? Well, I don't think you should. What happens if someone hooks up an IP gateway and a computer on that unit or the unit's wires? Correct, they have full access to your system.
Being on an ethernet with an HDL system and a recent version of the IP gateway's firmware lets you enable remote access. So, have you possibly had any unwelcome guests connected to your local network at some point? Have you checked if someone has enabled remote access to your system? Or maybe they just fetched the IP address, username and password from the IP gateway. Either way, someone could access your system remotely at any desired time later on. My advice is to keep the remote connection disabled.
If you have ever accessed your HDL system from remote through the IP gateway you should consider changing the login info and/or disable the remote access. As mentioned, HDL doesn't have any encryption, meaning that nearly anyone could possibly have picked up your login info when connecting through the Internet.
HDL has an SMS gateway that lets you text commands to the HDL system. Typically a set of phone numbers are whitelisted for sending commands. Commands can be something like "VACATION", "ALARM OFF", "OPEN GARAGE". It is very easy to spoof a phone number when sending a text. If someone knows - or guesses - the phone number you send commands from, so can they. If someone has/had access to the SMS gateway that someone could know the commands and even set up other commands.
So, what's the problem with having anyone connected to your HDL system either remotely or locally? Well, what if someone reads the status of the motion sensors? Then it could be possible to know if there's anybody home; maybe they could even make educated guesses about who's home depending on which areas are in use. You don't post a sign outside your home telling potential burglars that you aren't home, so you shouldn't let your smart home do that either.
Okay, somebody knows that noone's home, but you're protected by your smart home aren't you? Motion detectors, alarm sound, blinking lights, SMS warnings on intrusion. If someone has access to your HDL system they can easily turn this off. They could even turn it off, break in, turn the alarm system back on after leaving, and you wouldn't have a clue what happened.
If you have smoke detectors connected to the system any communication with the HDL system can be disabled.
Got your garage door connected to the system? Or even your front door? Well, you've probably figured it out by now. The doors can be opened (after disabling any alarms).
Someone could connect to your system and do vandalism like turning the heat on for full or control the blinds. Some things might be considered just a prank, but what if someone pushes the dimmers, relays and heating to the edge by either turning them on and off quickly or turning them to a 100%? Would it do damage to the components? Cause a fire?
Those previous five scenarios were the ones on top of my head. I'm sure you can think of a sixth and many more yourself.
This isn't some zero-day vulnerability disclosure of HDL-BUS pro. The system is working as intended. These are just my observations, worries and security tips when dealing with HDL. Make your local network secure, consider not having an IP gateway connected, make sure wires and components aren't accessible for anyone who shouldn't have access. I wish they taught this on the HDL training.
For those of you relying on your local network security, I want to quote a great book I'm reading now - "Abusing the Internet of Things" by Nitesh Dhanjani: "As we add additional IoT devices to our homes, the reliance on WiFi security becomes a hard sell. Given the impact to our physical privacy and safety, it's difficult to stand by the argument that all bets are off once a single device (computer or IoT device) is compromised. Homes in developed countries are bound to have dozens of remotely controllable IoT devices. The single point of failure can't be the WiFi password. What's more, a compromised computer or device will already have access to the network, so a remote attacker does not need the WiFi password."
Digipost - one of two "digital mailboxes" in Norway where you can get mail from public authorities - leaked users' full real name, IP addresses and login timestamps.
Who: | Norwegian postal service's Digipost |
Severity level: | Medium |
Reported: | May 2017 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | 125 USD worth of gift certificates |
Issue: | Information leak with users' name and IP address |
In Norway we have two official suppliers of a "digital mailbox" where Norwegian public agencies can send you letters and documents. It's considered a more secure way than regular mail for delivering important letters and documents. The mailboxes have been pushed pretty hard the last couple of years to ensure that as many as possible sign up for them.
I've personally used Digipost for quite a few years now. I think it's a great service for receiving important documents. One day I was wondering if my information and documents were safe with them.
Earlier on I used to attend the very good Java conference JavaZone every year. Five years ago I was at a talk from a couple of consultants working with Digipost, called Hypermediadrevet API i praksis (Hypermedia driven API in practice). It was an inspiring talk, and it made me build at least my next REST API hypermedia driven.
Little did I know that I would use this "Hypermedia as the Engine of Application State" (HATEOAS) to my advantage when looking for security issues at the same site years later. Simply explained, HATEOAS makes REST APIs more self-documenting and easier to browse through using the links provided.
I logged in to Digipost having Chrome DevTools open to see what was going on. I opened one or more of the REST URLs in a different tab. Having a JSON viewer browser extension like JSON Formatter (that hopefully doesn't ship everything it sees to a third party server) makes looking at JSON a pleasure. The API being hypermedia driven meant that you get instant linkification and can browse through the data quickly.
Digipost seemingly uses an auto-increment integer as the ID for the user, making it easy to check if your data is secured against access by others. (Remind me to write a post about the pros and cons of IDs like that - and no, security by obscurity does not make your data safe.) I changed a few IDs here and there and mostly got different kinds of error messages thrown back at me. However, I quickly found two URLs that didn't seem to have a proper authorization check.
Two URLs without proper authorization checks were found.
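The difference between the two ID styles is worth spelling out: a sequential integer is enumerable with simple arithmetic, while a random UUIDv4 carries 122 random bits. A small sketch (illustrative only - and remember, even an unguessable ID is no substitute for an authorization check):

```python
import uuid

# Auto-increment IDs: given your own ID, other users' IDs are one
# addition away.
def neighbouring_ids(my_id, radius=2):
    return [i for i in range(my_id - radius, my_id + radius + 1) if i != my_id]

assert neighbouring_ids(1000) == [998, 999, 1001, 1002]

# Random UUIDv4: 2**122 possibilities, practically unguessable -
# but access must still be authorized server-side.
random_id = uuid.uuid4()
```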
The first URL apparently returned the number of unpaid invoices you have. Not something you would care about if someone accessed it.
The second one was the alarming one for me. It returned the account activities for the past 30 days, including the user's full real name, IP addresses and login timestamps.
The information exposed isn't sensitive, so why should you care? Well, I think there are two important points here.
The first point is that a service like this - promoted and pushed by the government - should have zero tolerance for any kind of information leak. As system developers we make mistakes all the time. Every week we go to work and create new bugs. Hopefully they aren't security related, but sometimes they are. When working with services like this it's incredibly important to have proper methods and routines to minimize the chance of this happening.
The second point is that I think the combination of a full real name and a fresh IP address is unfortunate. It doesn't really matter for me, and probably not for you, but what about public figures or persons with unlisted addresses?
At night I sent an e-mail to the customer service.
Less than 48 hours afterwards I got an e-mail apologizing that I didn't receive an answer before and telling that they had fixed the issue and were going to deploy it the same day.
I received a reward of some gift cards which I appreciated, but what might've impressed me the most was that the chief of security actually took the time to add a handwritten thank-you note.
I think the issue in question was handled very well. Digipost responded quickly, fixed it quickly, and communicated in a clear and professional way. Even when reaching out telling about this post they responded in the same manner.
It's important to underline that I never got access to any documents, communication or any information regarded as sensitive. But for me this is a type of service that should have the same level of security as a bank. There just shouldn't be any information leaks. I truly believe that the information leaked could've been used for bad purposes.
The service uses the industry de facto standard for high security in Norway - BankID - for authentication, but still missed an authorization check on an HTTP PUT call. A classic weakness found in today's web apps.
Who: | Anonymous, let's call them Acme |
Severity level: | Very low |
Reported: | March 2017 |
Reception and handling: | Very good |
Status: | Fixed |
Reward: | 1 x Flax scratch ticket (โ3 USD (1:600,000 chance to win 125,000.- USD (but no, I didn't win anything))) |
Issue: | Missing authorization on REST endpoint + still authenticated after browser timing out session |
Acme is a "service booklet" for your home. It's a place to store all of the documentation about who's done what and your proof that all is ok if you were to sell your home. Everything about my house is stored at their servers.
I got a "snap" of 5 scratch tickets from a friend that he got from Acme for reporting some problem with him getting the wrong house number. I thought maybe there could be an easy way for me to get some scratch tickets as well.
This was a quick one I did while grabbing some lunch one Saturday.
I logged in to Acme using BankID with Chrome DevTools open. I surfed back and forth and got an impression of the web app and a list of URLs that were being called, plus headers, cookies and whatnot.
As with most web apps today there were a lot of Ajax calls going on, transferring JSON. I then tried replacing some IDs in URLs. In general stuff seemed pretty good. The security seemed to be in place.
Then I saw this sort of "task list" where you have a set of tasks - and can create new ones - for stuff you need to do with your home. You can also share them with third parties so they can do them for you and sign off on the work done. Once finished, you can set the task's status to Done.
Most developer tools for browsers let you copy any HTTP request as a Curl command. I got an ID of one of the tasks of my friend and his approval to change the status of it.
I've changed URLs and IDs, but otherwise it was this command used:
curl 'https://example.com/UpdateTask' \
  -H 'Cookie: <some cookies for BankID, session and Google Analytics>' \
  -H 'Origin: https://example.com' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'Accept-Language: nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36' \
  -H 'Content-Type: application/json;charset=UTF-8' \
  -H 'Accept: application/json, text/plain, */*' \
  -H 'Referer: https://example.com/app/index.html' \
  -H 'X-Requested-With: XMLHttpRequest' \
  -H 'Connection: keep-alive' \
  --data-binary '{"taskId":<some other person's task ID>,"status":true}' \
  --compressed
And bingo, the server returned HTTP 200 and the task was changed.
Returning to my computer some time after doing this, the browser told me I got logged out because of the session being timed out. I tried one of the Curl commands one more time and saw that the HTTP request went through and returned HTTP 200. Apparently I was still logged in and had a valid session even though my browser told me otherwise.
I think this is one of the more common issues in web apps today. While authorization for GETting data is in place, one has a tendency not to check whether the client is allowed to do what it's telling the server when PUTting data back.
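A minimal sketch of what the fix looks like on the server side - verify ownership before honoring any state-changing request (the handler and data here are hypothetical, not Acme's code):

```python
# Toy task store (hypothetical data).
TASKS = {42: {"owner": "alice", "status": False}}

def update_task(session_user, task_id, new_status):
    """Handle a PUT that changes a task's status, with authorization."""
    task = TASKS.get(task_id)
    if task is None:
        return 404
    if task["owner"] != session_user:  # the check that was missing
        return 403
    task["status"] = new_status
    return 200
```

The point is that the write path needs the exact same ownership check as the read path.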
I sent an e-mail telling about the two minor issues.
I got a long and well-written reply thanking me for finding the issue and telling me that the developers were looking at it. They also mentioned that they were growing and that a new employee was coming in who would have the service's security as their responsibility.
I got an update. They had found the authorization issue and were in the process of fixing it. Regarding the session still being valid, they said it was just that the server had a one-hour session timeout while the JavaScript app had it set to 20 minutes.
I contacted Acme telling them that I was posting this. Until then they were very professional and open, but suddenly they became a bit defensive. And they wanted to "inform me" that I had broken their terms the moment I had checked if they had any security holes. They did not want me to mention their company name or brand. They were afraid of the media. I can fully understand that. Who doesn't want to protect their brand?
I think they have handled the situation so well, and the issue was so minor, that I want to encourage them to come forward.
This was a minor issue, but an issue I see quite a bit. The reception and handling were good, at least until I told them I was publishing this. Developers: Go and check those POST and PUT requests and double-check that you verify the client is allowed to do what it wants to.
While Norway's version of the Social Security number (SSN) isn't considered sensitive personal information, it can still be used for ID theft and is sometimes treated as an authenticator and not only used for identification. Knowing (or systematically picking) a car's number plate, you can get quite a bit of personal information about the owners. Also, services hosted alongside the one in question seem to have dubious security.
Who: | Tryg's mobile app and Infotorg's web services |
Severity level: | Low to medium |
Reported: | February 2017 |
Reception and handling: | Very poor |
Status: | |
Issue: | A lot of personal info available for anyone who wants it |
The way to get the name (typically to look up the phone number to make contact) of a car owner in Norway has typically been to just send an SMS that costs some cents. A friend and former colleague told me about the insurance company Tryg's app Trygg på reise (Safe travel) where you can look up this information for free (plus Google broke their SMS app Hangouts, making it impossible to send SMSes to 4-digit phone numbers). I had used it for quite a long time when one day I got curious about where it got its data from, and whether it might be possible to create some services on top of that data.
If you want to intercept traffic between a server and a mobile app (even SSL), the HTTP proxy Charles (and Android 6 or below for SSL) is the way to go. It's very easy to use and gives a very good overview of the data going back and forth. And it lets you easily copy the HTTP requests as Curl commands.
Within a couple of minutes you have a pretty good grasp of the HTTP calls the app makes.
The first surprise I got was that the app and server actually use the protocol SOAP, which is just terrible to work with. (I suspect people having to use, develop and debug SOAP services live in a fog of #FML.)
The second surprise was that the web service actually sent back much more info than what the app displays in its user interface, and not only the municipality that you also get with the aforementioned SMS service. I've summarized all the details further down, after the technical details here. But seeing both the owner's and co-owner's SSNs and addresses surprised me the most.
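To give a feel for the SOAP style, here's a hedged sketch of what such a request envelope might look like when built by hand. The namespace, operation and element names are invented for illustration; this is not Infotorg's actual WSDL:

```javascript
// Build a hypothetical SOAP envelope for looking up a vehicle by plate.
// Everything after the standard SOAP envelope namespace is made up.
function buildVehicleLookupEnvelope(licencePlate) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"',
    '                  xmlns:veh="http://example.com/vehicle">',
    '  <soapenv:Body>',
    '    <veh:GetVehicleOwner>',
    `      <veh:LicencePlate>${licencePlate}</veh:LicencePlate>`,
    '    </veh:GetVehicleOwner>',
    '  </soapenv:Body>',
    '</soapenv:Envelope>',
  ].join('\n');
}

console.log(buildVehicleLookupEnvelope('AB12345'));
```

All that XML ceremony for what is essentially a single string parameter is a good part of why debugging SOAP traffic in a proxy is so tedious.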
I noticed that the web service calls always included the username and password instead of the returned session id. No biggie, but a bit strange. The strings for usernames and passwords were all upper case and only 6-7 characters long. The password was almost the same as the username, and all of them contained the name of the customer and the abbreviation of the service name. I hope that isn't the standard pattern, because it would give me a clue about the logins for the other services.
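What I'd expect instead is a session-based flow: log in once, then pass the returned session id on subsequent calls. A minimal sketch of that pattern (all names and the credential handling are hypothetical, nothing here is from the actual service):

```javascript
// In-memory session store for the sketch.
const sessions = {};

// Log in once; real code would verify against a credential store, of course.
function logIn(username, password) {
  const sessionId = 'session-' + Math.random().toString(36).slice(2);
  sessions[sessionId] = { username, created: Date.now() };
  return sessionId;
}

// Subsequent calls carry only the session id, never the raw credentials.
function callService(sessionId, request) {
  const session = sessions[sessionId];
  if (!session) throw new Error('Not authenticated');
  return { user: session.username, request };
}

const id = logIn('ACME01', 'secret');
console.log(callService(id, 'vehicleLookup').user); // the session carries identity
```

Besides keeping the credentials off the wire on every call, a session id can be expired and revoked server-side, which short all-caps passwords obviously can't.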
The data is returned as intended, so there's no information leak in the service itself. The web service works as it should. However, it's more questionable whether a service like this should be facing a public client.
The data returned from the service is as follows:
Is this okay? All the app does is show the name of the owner and details about the vehicle itself.
Apparently the names of the co-owner and previous owner are public information according to the law called Offentlighetsloven (Freedom of Information Act) (Norwegian link only).
Infotorg provides quite a few services. (Norwegian link only, sorry.) Having the URL for the web service, I of course checked out what else was on the server. I was a bit surprised to see that the root site had a seemingly complete list of all the services available at Infotorg. There were links to documentation and WSDLs (Web Services Description Language files) describing all these services and how to connect to and use them. And these services do indeed contain much more sensitive information. There is a national population register, financial information, credit assessment, an employee register, a lay judge register, etc. It's important to note that I never had (or tried to get) access to these other services. My point is that the openness is a bit too much, and seemingly the user credentials policy isn't very strict. But this is just speculation.
To add to the eerie feeling about these services, there are links to a test site, a test CMS and information about a test client. Google has of course indexed all these other sites and subdomains. Also, there are pages giving errors that reveal more information about the infrastructure and services running.
I sent an e-mail to the contact e-mail address provided by Tryg at the app's Play Store page. I never got an answer.
I also used a web form to get in touch with Infotorg.
I got an answer from Infotorg in less than 24 hours. That's prompt, that's good. And they wanted more details.
When I provided more details, with an example Curl command for them to try, I got an automatic e-mail back telling me that the person handling this was unavailable. I never heard back after that, so I tried again one month later, this time including an e-mail address from the automatic e-mail that was supposed to be used for urgent cases. I never heard back. So I tried once more, writing to both e-mail addresses five days before publishing this. I never heard back.
I'm publishing this post. So is this responsible disclosure? Yes, I tried hard to get an answer. But on the other hand, it seems to me that the involved parties don't think this is a disclosure to begin with, or that it's a problem at all.
Tryg's user at Infotorg's service was closed (as far as I understand, after Tryg contacted Infotorg).
Tryg reached out to me. They thanked me for the help in finding the issue, said they were sorry it was there in the first place, and told me it had been resolved.
digi.no published the article Norsk mobilapp åpnet for tapping av masse informasjon om norske bileiere (Norwegian mobile app opened for tapping of a lot of information about Norwegian car owners).
Tryg commented on this post here themselves.
I think Tryg - when the information finally reached them - has handled the case very well. They reacted promptly, fixed the problem, and have been very open and honest about everything. I'm really happy with that.
His Majesty The King has got a few cars. Looking up e.g. the one with licence plate A-1, you'll see that the car is now registered to The Royal Court, but it used to be registered directly to our previous king, Olav V. They have also trusted the insurance company If since 1995.
In the summer of 2017 the Norwegian Public Roads Administration made it possible to pay for your own personalized licence plate. There have been quite a few news articles about people getting funny and fascinating plates. The web service in question works for those as well. Maybe something to think about before sticking your head out there.
Reporting this issue, I got a question back asking for more details. There's no better way to understand a security issue than seeing your own data. The person who responded had a fully closed private Facebook profile. Or did he? Well, he had one single public post; a check-in from when he got a free car wash from a big radio show in Norway. In that post there was a picture of the car in the car wash. So he kept a pretty low profile on the Internet, but one could still look up his name, address, SSN, etc. Doesn't that hurt just a little bit?
We should probably not fear for our SSN. But I'm still not sure if I like the idea that just based on a licence plate anyone should get your full address or know any details about your insurances.
Further, I hope all of Infotorg's more sensitive services are much more secure than my first impression suggests; that someone is alerted if anyone tries any brute force attacks or systematic information gathering, and that the logins don't consist of only a few capital letters.
Also, I wish it wasn't so hard to get anyone's attention when trying to report a security concern...
I call this "far-fetched" because it's hard to believe it would happen, but I can't help thinking about it.
We know from the media in recent years that governments in different countries collect quite a bit of intelligence and information about people. Wouldn't it be interesting for some states to get a catalogue of a big part of Norway's population? I mean, they would get a real one-to-one identifier, full names and quite a bit of metadata. Combining this information over time with information from other sources? Observing a dataset like the one discussed here over time, one can get a sense of family relations, split-ups, address changes, income changes, etc. Is that okay? What if you at some point shared an address with a person who has an entry ban in a country you want to travel to? Should they stop you too, just to be sure?
What about insurance companies? They could in theory use the dataset to target potential customers. If they know that they beat the prices of one other particular insurance company they could make contact and try to sell their product instead. But then again, they have probably always had full access to these data.
If you use your imagination I'm sure you can come up with other ways to (ab)use the data.
Over the years I've discovered so many security holes and information leaks on the Internet. Earlier I've only notified the involved parties, but I think it's time to go public and do "responsible disclosure".
While preparing these posts I've repeatedly asked myself whether I should go public with my findings or not.
I'm still not entirely sure what the right answer is. What I do know is that I want increased focus on web security and that I feel a social responsibility to do so.
The purpose of posting these vulnerabilities is fivefold:
Hopefully the issues presented on this site can be a small part of getting some kind of discussion on how to deal with computer security and personal data.
While looking for security vulnerabilities I have followed a few simple rules.
The sensitivity of the information leaks I found varies. It goes all the way from "Nah, I don't really care" to "Holy shit, this is not cool". But I think they all represent some unique points with respect to the vulnerabilities and the types of personal information involved.
I'm all for responsible disclosure and have immediately reported my findings. Generally I'm not publishing any details until the problem has been confirmed fixed. However, sadly, in some cases there's just no interest or response from the other party.
If you want more thoughts about responsible disclosure I would recommend reading Troy Hunt's site (and maybe especially the video in that link).
This blog is created with Knockout and Materialize, and keeps all blog posts in a single JavaScript array. All static.
I wanted a blog. A plain and simple blog. More often than not I read blog posts hosted by medium.com. It seems like they are really dominating these days (at least for tech blogs). And I understand why; the layout is so simple yet attractive and easy to read.
I expected to settle with a medium.com blog. However, it isn't possible to have ads in the blog posts. And I wanted that.
Wordpress has ruled the world for quite a long time. wordpress.com of course costs money if you want to make any money using ads. I don't expect many dollars in income, so I'd like to avoid any fixed costs. Then there is wordpress.org, but I don't really want to host it myself and stay up to date with security issues and all.
I looked at Google's blogger.com. Customizing the layout and templates I thought I was getting there. They have some really nice features. But I couldn't make it look and feel exactly right. And it felt cumbersome to do all the adjustments to get where I wanted.
I knew I didn't want to reinvent the wheel. If this had been somewhere from 2000 to 2007 I probably would have mashed something together using PHP and MySQL. But the thought of doing that in 2017 repelled me. I didn't want any database setup, SQL or stuff that takes time from creating the actual product.
Furthermore, I didn't want hosting at one of the big companies offering "free" (they tend to end up costing a few bucks) backend hosting, with all the hassle of setting up a new environment and installing SDKs that need to be constantly updated or suddenly remove support for some version of whatever you use.
Basically I had the following requirements.
I was wondering about using just static HTML files. That would indeed answer most of my requirements. But then I thought about having a simple JavaScript SPA.
I suppose Angular 2 or React would be among the most logical choices of JavaScript framework in 2017. But I wanted productivity, and didn't want to spend a lot of time learning yet another framework when all I want is a quick and simple blog, so I went for good old Knockout, which I used quite a lot earlier on as an IT consultant.
While one might argue that Knockout is beyond its prime time, it sure works great and it is mature. And no matter which newer JavaScript framework I would go for it would soon be considered "old". If you haven't read the article How it feels to learn JavaScript in 2016 by Jose Aguinaga, you really should. It's painful to read, but so true.
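To illustrate the "all posts in one JavaScript array" approach, here's a minimal sketch with a couple of typical lookups. The field names are my guesses for illustration; the real structure is in the js-web-blog repository:

```javascript
// All posts live in one in-memory array; no database, no backend.
const posts = [
  { slug: 'how-this-blog-is-made', title: 'How this blog is made', published: '2017-05-01' },
  { slug: 'responsible-disclosure', title: 'Responsible disclosure', published: '2017-06-10' },
];

// "Routing" to a single post is just a lookup by slug...
function findPost(slug) {
  return posts.find((p) => p.slug === slug) || null;
}

// ...and the front page is a sort on the (ISO-formatted) date strings.
function newestFirst() {
  return [...posts].sort((a, b) => b.published.localeCompare(a.published));
}

console.log(findPost('responsible-disclosure').title);
console.log(newestFirst()[0].slug);
```

In the actual blog an array like this would be wrapped in Knockout observables so the templates re-render when the selected post changes, but the data model itself stays this simple.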
I'm no designer. So I like to use UI frameworks that ensure I can't get it completely wrong. I really like Google's Material Design and am used to using it on Android. I quickly found Materialize and haven't looked back. It has great features and is a joy to use. I just wish it didn't depend on jQuery.
If you're interested you can have a look at the source code for this blog at https://github.com/roys/js-web-blog. The project itself is licenced under the MIT License, but for the contents (posts and images) I reserve all rights.