
Dogma in IT: Why System Administrators Need to Rethink Trust

In the latest guest post from our resident Linux guru, we take a look at the idea of trust in managing systems integrity.

One of the things that really bothers me about the IT community at large, especially in larger IT setups, is the speed at which good information propagates through the cyberscape. I feel that good advice trickles through slowly, if at all.

IT, like many other professions, unfortunately suffers from dogma. The key point from the article that I'd like to highlight, however, is this:

“Once we think we know how something should be done, we keep doing it, then we teach others to do it the same way, and they in turn teach others until eventually you reach a point where no one remembers why something is done a certain way but we keep doing it anyway.”

Reliance on dogma, I suspect, really holds back where Linux systems could be (and arguably should be) in responding to the new challenges that system administrators need to get to grips with.

Vendors such as Microsoft maintain an insistence, especially in their certification schemes, that ‘their way is the only way’ a process should be followed – regardless of whether or not a better means of achieving it exists. This type of thinking doesn’t help, and I am sure it leaks into the Linux landscape too.

Regardless, perhaps I am not alone in how I feel about the way information flows through IT communities. Sites such as the excellent Server Fault clearly demonstrate demand for new ways of doing things within IT.

So, why do I make this point? Well, the worst dogma in IT, as I see it, is in managing system integrity.

I generalize system integrity somewhat, so to clarify what I mean by it, here are some of the things I would include on the list:

  • The hardware should behave in a trustworthy manner.
  • The operating system should behave in a trustworthy manner.
  • Processes and users should behave in a trustworthy manner.
  • Data (shared libraries, files and block devices) should behave in a trustworthy manner.

For the sake of my own succinctness I’ll assume that we all agree that these items cover at least some of the notions of system integrity.

This is probably reasonably clear if you think about it, but – what is trust? And what is trustworthy? I am surprised by how few people have challenged their notion of this term in the last 10 years. Here are some of the most common responses I have heard:

“A trustworthy system is one which is secure. It sanitizes input properly.”
“A trustworthy system should be resistant to attack.”
“A trustworthy system should be reliable.”

I agree all of these are noble goals. I suspect most of the ‘groupthink’ comes from the Microsoft Trustworthy Computing Initiative, in itself a noble goal.

How about the less obvious term – what is meant by the phrase ‘should behave’? I am willing to bet that expectations here have not changed in the last 15 years or so, for most environments.

I posit that the common dogma of trust, from a system administrator’s perspective, is incomplete and inadequate. Trust is more than security. The trust most people expect effectively boils down to a developer’s notion of trust.

Since trust is built in chains, this ultimately means that system administrators trust their developers to make a trustworthy system.

This is flawed. Developers think of their process as an independent system with an arbitrary amount of resources. System administrators should think of processes as groups of interdependent or conflicting objects with limited resources.

Making an independent process secure or reliable does not meet the goals of system administration and is inadequate. The NSA points out that application software will have bugs, and that finding an exploitable one is inevitable.

People’s expectation of ‘should’ is also inadequate. If an application should be trustworthy but isn’t, what then – and what impact would that have on other processes? How many processes or application stacks do we run as system administrators which should be trustworthy?

I am willing to bet that 99% of processes out there are trusted, within the confines of what they are meant to do, purely on the strength of what the developer has got the code to do and what the system administrator or package maintainer hopes it will do.

I propose that in user space “should” actually means “demands”. As a Linux system administrator you can make quite specific demands that your processes are actually trustworthy.

So, let’s look at what I believe trustworthy should look like.

  • A system should maintain least resources.
  • A system should maintain least privilege.
  • A system should have a maximum resource.
  • A system should immediately report anomalous behaviour.
  • A system should load and deal with data in a measurably expected way.

I suspect we are some way off from truly trustworthy systems that enforce this at every layer from the hardware up (though the hardware for it already exists). There are probably more demands than these, depending on the layer you are dealing with. In user space, “should” is “will always be”, and it can be enforced by the kernel.
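To make that concrete: on a systemd-based distribution (one modern mechanism for this – the unit and binary names below are hypothetical), several of these demands can be stated declaratively in a service unit, and the kernel will enforce them:

```ini
# untrusted-app.service – hypothetical unit showing kernel-enforced demands
[Unit]
Description=A service we demand to be trustworthy

[Service]
ExecStart=/usr/local/bin/untrusted-app
# "A system should have a maximum resource": hard memory ceiling via cgroups
MemoryMax=512M
# "A system should maintain least privilege": no setuid escalation,
# a read-only view of the operating system, and no extra capabilities
NoNewPrivileges=yes
ProtectSystem=strict
CapabilityBoundingSet=

[Install]
WantedBy=multi-user.target
```

Each directive turns a “should” into a “will always be”: the limit is enforced by the kernel, not left to the application’s good behaviour.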

And this, for the most part, is the most frustrating part of the whole thing. Whilst Linux is not perfect, what you can do in Linux is pleasantly surprising. The Linux kernel devs really do have their heads screwed on. The Linux system administration community at large is, I am ashamed to say, about 5 years behind the times with what we can actually do to tackle trustworthiness on Linux – and all this because so little knowledge or momentum is given to new, interesting tech; instead we perpetuate the same dogmas.

I want to focus on making your Linux systems enforce trust demands in user space, but the subject is far too large for one article. To give you some idea of the things you can do in Linux now – and, perhaps, in the near future – to meet these notions of trust, here are some of the technologies and ideas that try to tackle the problem:

A system should maintain least resources

I’ll present some arguments about the dogmas in software design that we, as system administrators, are often stuck with, and what philosophies to look for when choosing a design.

I’ll look at how we can truly constrain resources for processes efficiently with control groups.
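As a taste of what that looks like, here is a minimal sketch. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges (a v1 hierarchy uses per-controller paths such as /sys/fs/cgroup/memory/ instead), and falls back gracefully when those assumptions don’t hold:

```shell
#!/bin/sh
# Sketch: cap this shell and its children at 256 MiB of memory using
# control groups. Assumes a writable cgroup v2 mount at /sys/fs/cgroup.
CG=/sys/fs/cgroup/demo

if mkdir -p "$CG" 2>/dev/null && echo 268435456 > "$CG/memory.max" 2>/dev/null
then
  echo $$ > "$CG/cgroup.procs"      # move this shell into the group
  result="capped at $(cat "$CG/memory.max") bytes"
else
  result="skipped: need root and a writable cgroup2 hierarchy"
fi
echo "$result"
```

Once the shell is in the group, every child it spawns inherits the ceiling – the kernel, not the applications, guarantees the limit.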

A system should maintain least privilege

I’ll talk about Mandatory Access Control, why DAC alone is inadequate for constraining user space, and why SELinux was designed to complement an existing system rather than replace it.
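For a first look at where your own systems stand, the sketch below inspects the current MAC state and a few security contexts. It is guarded, since the SELinux userland tools are only present on some distributions:

```shell
#!/bin/sh
# Sketch: inspect Mandatory Access Control state. getenforce and the -Z
# flags come from the SELinux userland, so check for them first.
if command -v getenforce >/dev/null 2>&1; then
  mode=$(getenforce)           # Enforcing, Permissive or Disabled
  ps -eZ | head -n 3           # the security context of each process
  ls -Z /etc/passwd            # the security context of a file
else
  mode="selinux-tools-missing"
fi
echo "MAC state: $mode"
```

Those contexts are what the policy constrains – regardless of what the file mode bits (DAC) would otherwise permit.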

A system should have a maximum resource

Again, I’ll cover the purpose of control groups, and I’ll discuss a commonly held bad dogma in system administration about performance and how to meet the technical expectations of your customers.

A system should immediately report anomalous behaviour

I’ll focus on why active monitoring can be inadequate or ineffective and look at how passive monitoring with audit deals with trust breaches far more effectively.
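To give a flavour of what passive monitoring with audit looks like, here is a sketch of an auditd rule file (the key names are my own, hypothetical choices):

```
# integrity.rules – hypothetical auditd rules, loaded with auditctl -R
# Watch the account databases for writes and attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
# Watch the audit configuration itself for tampering
-w /etc/audit/ -p wa -k audit-config
```

The kernel records every matching event as it happens; a breach is then found after the fact with, for example, `ausearch -k identity`, rather than hoping an active agent happened to be looking at the right moment.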

I’ll also take a look at some of the technologies in control groups for monitoring utilization.

A system should load and deal with data in a measurably expected way

I’ll take a look at file integrity monitoring systems, and at how to cope with offline tampering using dm-crypt and IMA.
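The principle these tools share is measurement: record a known-good hash and refuse to trust anything that no longer matches it. IMA applies this in-kernel at the moment a file is loaded; the toy sketch below shows the same idea in user space with sha256sum:

```shell
#!/bin/sh
# Toy sketch of the measurement idea behind file integrity monitoring:
# record a known-good hash, then detect any later change.
f=$(mktemp)
echo "trusted content" > "$f"

baseline=$(sha256sum "$f" | cut -d' ' -f1)   # measure the trusted state

echo "tampered offline" >> "$f"              # simulate offline tampering

current=$(sha256sum "$f" | cut -d' ' -f1)
if [ "$current" = "$baseline" ]; then
  verdict="intact"
else
  verdict="tampered"
fi
rm -f "$f"
echo "$verdict"                              # prints "tampered"
```

The difference is where the measurement lives: a user-space checker can itself be tampered with offline, which is exactly the gap that dm-crypt and in-kernel IMA are meant to close.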
