Take it a step further. Set up a locked down bastion server that can only be reached from your IP address, and disable all access to your DB from anything but your application and the bastion server. Then tunnel your local queries through your bastion server.
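That setup can be sketched in a couple of commands. All hostnames, users, and ports below are placeholders, not anything from the thread:

```shell
# Forward local port 3307 through the bastion to the otherwise unreachable
# database host ("bastion.example.com" and "db.internal" are made-up names):
#   ssh -N -L 3307:db.internal:3306 admin@bastion.example.com
# Local queries then travel through the tunnel:
#   mysql --host=127.0.0.1 --port=3307 --protocol=TCP -u app -p
# Sanity-check the forwarding spec without actually connecting (-G prints the
# resolved client configuration and exits):
ssh -G -L 3307:db.internal:3306 admin@bastion.example.com | grep -i localforward
```

The DB host's firewall then only needs to admit the application server and the bastion on port 3306.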
Another option is to install these tools at a separate domain and set up HTTP authentication on the webserver. Automated bots scanning for vulnerable apps then won't even get to the login page.
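A minimal sketch of the HTTP-auth idea, assuming nginx; the username, password, and file paths are placeholders:

```shell
# Create an htpasswd-style entry (apr1 hash, understood by both nginx and
# Apache) without needing apache2-utils installed:
printf 'admin:%s\n' "$(openssl passwd -apr1 's3cret-placeholder')" > pma.htpasswd
cat pma.htpasswd

# The matching nginx location block would look roughly like:
#   location / {
#       auth_basic           "Restricted";
#       auth_basic_user_file /etc/nginx/pma.htpasswd;
#   }
```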
Interestingly, Google specifically excludes CSV vulnerabilities like that from their bug bounty program.
> CSV files are just text files (the format is defined in RFC 4180) and evaluating formulas is a behavior of only a subset of the applications opening them - it's rather a side effect of the CSV format and not a vulnerability in our products which can export user-created CSVs. This issue should be mitigated by the application which would be importing/interpreting data from an external source, as Microsoft Excel does (for example) by showing a warning. In other words, the proper fix should be applied when opening the CSV files, rather than when creating them.
> > A lack of filtering on user CSV output that could allow an attacker to run arbitrary code on an administrator's computer.
Iff the user has Excel, and explicitly allows it to run macros in a CSV file. It's already a stretch to call this a phpMyAdmin vulnerability, much less a "medium severity" one.
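For what it's worth, the commonly suggested export-side mitigation is mechanical: prefix any cell beginning with `=`, `+`, `-`, or `@` with a single quote so spreadsheets treat it as text. A throwaway sketch:

```shell
# A cell beginning with a formula trigger gets an apostrophe prefix;
# ordinary cells pass through untouched:
echo '=cmd|payload' | sed "s/^[=+@-]/'&/"   # prints '=cmd|payload
echo 'plain value'  | sed "s/^[=+@-]/'&/"   # prints plain value
```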
> > Improper cookie invalidation that could allow an attacker to unset internal global variables.
From the PDF report:
> Note: Because of the large amount of global variables, and the relatively short nature of this assessment, NCC Group was unable to fully determine the impact of this vulnerability.
It might be serious, but they didn't have enough budget to make a proper analysis.
I really hate the idea of having a web interface to my database anywhere, no matter how secure they say it is. Social engineering (over direct "hacking") lends itself to circumventing technical security.
No matter the technical security (although I'm super happy they test phpMyAdmin!), I still wouldn't trust it on my servers.
Granted, you can lock phpMyAdmin down via IP restriction, VPN, etc. That's definitely good but, if you can forgive a bit of generalization, those measures tend to be over people's heads or too restrictive for the people who use phpMyAdmin.
If we do connect to a database using a GUI (usually an app instead of phpMyAdmin), my preference is to go through an SSH tunnel. This lets us connect securely (over SSH) while keeping MySQL inaccessible from the outside world - meaning you can still use MySQL's built-in network security features (bind-address and username hosts, along with firewall restrictions) to lock it down.
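The bind-address side of that, as a config sketch (the file path is the Debian/Ubuntu default; adjust to taste):

```ini
# /etc/mysql/my.cnf -- listen on loopback only, so remote clients must come
# in through an SSH tunnel (or whatever the firewall explicitly allows):
[mysqld]
bind-address = 127.0.0.1
```

Host-scoped accounts (e.g. `'app'@'127.0.0.1'`) then narrow access further via MySQL's username-host model.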
Why do you presume that the web app has to be public? You can easily limit access to the web app by IP, or you can put it on a private network that you access through a VPN. That would make it more secure than most web services that we trust regularly, like Gmail or PayPal...
Stupid question: how does a security audit work? Do the consultants just read through the code? Do they try to find security bugs like they do in bug bounty programs?
I'm not an expert in this field, but we recently did a security audit. The auditors get access to the code in order to evaluate it for vulnerabilities. In our Ruby application, they also checked the gems we are using (albeit through open source tools).
They also did an in-app audit where they tried to break the application in whatever ways they could. Having access to the code helps with this.
When you get audited by a potential customer, it usually involves not having code access and trying to penetrate the app without it.
The first few chapters of the book "The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities" outline a very meticulous process for reviewing source code for vulnerabilities in a professional manner.
Yes. The way it works is that two smart hackers go into the office each day and spend eight hours trying to think of as many creative ways of attacking the target application as possible. Nothing is off-limits except what is agreed up front, but you're obviously expected not to interfere with production operations. The client is generally expected to set up a testing environment substantially similar to production, but usually consultants just have to muddle through with whatever the client gives them, which may not be (and usually isn't) populated with production data. As long as consultants are given the ability to enter data themselves, i.e. admin accounts, this is fine, because data entry is their job. Hacking is basically large-scale data entry, and it's as boring as it sounds: very tedious, interspersed with excitement when you see an XSS popup window or figure out a clever way to get a reverse shell.
If after two weeks of this you found no vulnerabilities of medium or higher severity, you were generally considered to not be doing a very good job.
The secret of the industry is that at the end of this process, you are deemed secure. That's the point of the security audit. But if it's not a repeating process, it doesn't work. It may work for that particular version of the application, and it may substantially improve the security in that old vulnerabilities are found and fixed. Let me abandon this train of thought and put it another way:
This post is a press release saying that phpMyAdmin is secure. But that's not how this works. High-severity vulnerabilities are often found near the end of an audit. This is because the consultants have had time to become intimately familiar with the application. But the late stages of an audit are exactly when the consultant's time is mostly spent writing reports for the existing findings, and not doing pentesting. This means that two weeks is often just long enough to start finding serious vulns, since week one can be devoted to pentesting and week two is mostly reporting from Tuesday onward. But that "mostly reporting" process gets the consultant thinking about the application as they're doing the writeups, which -- you guessed it -- leads to realizing that there's something clever they could try. And when they try that clever thing, sometimes it yields a high-severity vuln. It's the opposite of a mechanical, thoughtless process.
That means your results will vary depending on who, specifically, is doing the auditing. If you run your application through the consulting process twice -- same version, same staging data, same everything -- it's likely that you'll get wildly different results, because the pentesters are different people.
It has to be an on-going process in order to be effective. And it can be highly effective. It just costs so much that only the most massive companies can afford this.
That's not to say this audit wasn't effective. It's possible that whoever did the audit found substantially everything. But it was interesting to discover how often this was not the case, in a "How'd they miss this last time?" sort of way.
Good question. It can be all or none of the above. Here's what happens at a high level:
Once a company decides it needs a security assessment performed on an application, it engages with a consulting firm. Consulting firms generally offer a variety of services, from web and mobile application penetration tests, to cryptanalysis (implementation and design), to reverse engineering and binary penetration testing, with source code audits sprinkled throughout (or as standalone assessments). Let's assume they move forward with a web application assessment.
The company decides if it wants a source code audit, a penetration test or both. The most comprehensive assessments will include source code and unmitigated access to a staging environment that the consultants do not have to worry about destroying. However, they could also decide they don't want to hand over the code (common in things like sensitive financial applications or in applications with protective developers). I've worked on many assessments where I had no source code - this is called a "black-box" assessment.
Conversely, an assessment might consist of a source code audit with no penetration test! This is less common, but it's particularly suited for engagements where the developers are fairly sure they've eliminated the most common issues and they are really focused on obscure errors, logic flaws and race conditions.
It really depends on the type of security audit. You can have more exotic ones, like black-box cryptanalysis where a company hands Riscure a proprietary payment mechanism and there is heavy reverse engineering and side channel analysis. It can also be very vanilla, like the web application penetration tests that bug bounty programs attempt to simulate. Companies decide what they are going to do based on their application's profile and their goals.
Putting this all together, these are the stages of a traditional security audit from a high-quality firm:
Step 1: A company receives several proposals and decides which company to move forward with based on which statement of work most closely matches their security goals, timing, budget and desired expertise. Then they decide on a start date.
Step 2: Representatives from the company (generally a technical manager, a security engineer or manager if the company has one and at least one developer) have a conference call with representatives from the security firm (generally the security consultants performing the assessment, an account executive and a technical manager) to "kick off" the assessment with technical and logistical engagement planning. Things like "How will we access the staging environment?" and "Is there anything off-limits?" are fleshed out here, as well as reminders about scope and scheduling.
Step 3: Things like source code, infrastructure/application/API documentation, PGP keys, etc. are securely exchanged and verified. This comes out of a list of mutual action items from the kick-off call.
Step 4: The actual assessment happens, generally in a period of one to three weeks. I've never been involved in an assessment less than one week long, and assessments longer than four weeks usually need to re-scope or they become monolithic and difficult to coordinate. Progress reports with findings and testing data are securely sent to the company from the security firm.
Step 5: The assessment is finished and a final deliverable is securely sent to the company from the security firm. An optional re-test assessment might happen a few weeks or months later to confirm if the findings have been satisfactorily resolved.
This is based on my knowledge of having worked in security consultancies, engaging with them as an in-house security engineer and running my own consulting firm.
For example: https://github.com/phpmyadmin/phpmyadmin/blob/4cd8ab8a957a23...

Despite setting several security-related session configuration values, they don't touch the session entropy settings, which means a potential session fixation vulnerability.

This might not be a concern for most users: typically your distro ships a php.ini configured to read at least 16 bytes from /dev/urandom. But not always! Many projects set session.entropy_length and session.entropy_file just to be sure.
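For reference, the php.ini directives in question live in the session group (their exact names are session.entropy_file and session.entropy_length; PHP 7.1 later removed them and always draws session IDs from the CSPRNG):

```ini
; php.ini -- force session IDs to be built from kernel randomness (PHP 5.x era)
session.entropy_file = /dev/urandom
session.entropy_length = 32
```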
Given that the assessment occupied two weeks with two consultants, between $25,000 and $35,000.
I don't have intimate knowledge of NCC Group's pricing structure because I don't work there. But I have friends who do, and similarly situated consultancies that I've worked for are in the $10,000/week range for a one-off assessment with non-senior staff. This is also somewhat close to what I charge through my own smaller consulting practice.
Now, if there was specialty work (like crypto), particularly comprehensive work, more consultants billed on the assessment than usual or senior/principal consultants billed on the assessment, the total fee would go up. This is why I added a $10,000 premium to my estimate; the source code analysis detailed in this report might qualify as "non-standard."
That said, NCC might have worked on a discount for the opportunity to advertise that they were involved in the audit. But I don't see this assessment having cost anything less than $20,000, even in a charitable situation.
What are some good alternatives? I've been using DataGrip for the last couple of months, but prior to that I used phpMyAdmin all the time because I just couldn't find anything else as useful for MySQL. And even with DataGrip, I sometimes have to log in to phpMyAdmin because there's stuff DataGrip doesn't do...
It usually comes by default with cPanel and Plesk on web servers, as well as MAMP/WAMP/XAMPP for development environments. In my experience it's still used a lot by junior devs who haven't yet learned any different, and by people with absolutely no idea what they're doing.
Yes.
A lot of the time it's available on shared hosting behind some login, especially if you don't have shell access on inexpensive hosts.
It's been available at places I've worked. It was locked down by IP, and the database was restricted to access by IP too, in theory making outside access more difficult.
I used it a lot (less now) and honestly, I kind of like it. The interface is a little kludgy, but it gets the job done. Queries are editable and exportable in various formats, and you can construct a search via the GUI then edit the SQL it generates. It seems to have a lot of functionality built in: user management, table management, etc.
For local instances I use Sequel Pro too (the SSH login function it has is nice and works well).
Is there much sense in auditing tools that are normally used by the admin and by design expose a lot of control over the server? Sure, they must not be exposed to outsiders, but if auth is done right, it doesn't matter how far the insider can get... IMO
I encourage everyone to use MySQL Workbench over SSH. For whatever reason, people seem not to understand the concept of SSH and the inherent security it provides. But once you explain to folks how to use it effectively, it really is a good balance of security and usability.
That is misleading. They said they had the ability to unset global variables. Looking at the phpMyAdmin codebase, I understand why they didn't have the time.
This is not relevant. An audit costs a substantial amount of money; you wouldn't expect your consultants to spend a lot of time exploiting or building proofs of concept. In a time-boxed assessment, you want the consultants to cover the most ground, not spend too much time on a single finding.
If fixing the bug is less work than determining exploitability, fixing it and moving on is just economical. Digging in further would only have distracted from looking for other vulnerabilities.
Here is a command that forwards all traffic to localhost:3306 across the SSH tunnel to example.com:3306 (the MySQL default port).
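A command matching that description, with `user` as a placeholder login:

```shell
# Listen on local port 3306 and forward each connection over SSH to
# port 3306 on example.com (127.0.0.1 is relative to example.com):
#   ssh -N -L 3306:127.0.0.1:3306 user@example.com
# Verify the forwarding spec without connecting (-G prints the resolved
# client configuration and exits):
ssh -G -L 3306:127.0.0.1:3306 user@example.com | grep -i localforward
```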
I would never run a DB admin application on the live server because it's just one more piece that might open a security hole.
I use it with Wine in Ubuntu.
> Improper cookie invalidation that could allow an attacker to unset internal global variables.
Those don't count as serious issues? Props to them for making the report public though.
Aren't those called "applications"? And yes, I hate them too.
For us it sees plenty of use with poorly developed legacy software (e.g. WordPress).
I can't thank the people who created it and maintain it enough.
But to get the good stuff one has to configure it properly, and generally people don't bother configuring it. They just place the files in a folder.
No other web-based tool for any database that I have tried even comes close.
https://twitter.com/totally_unknown/status/74275332346864026...
Repetition of "completing" in the first line.
I wouldn't read too much into it.