After some study, or perhaps a specialist course or presentation, you'd like to start implementing in your company the best practices you have learned, and perhaps start a new and better era for your IT department.
But it seems that something always goes wrong, or there are unexpected difficulties that make all your plans, and dreams, fail; and after some fighting you usually end up saying "OK, that WAS the best practice, and we are sure not to follow it".
This is my list of things I've found impossible to achieve in several years of work.
Proper planning for projects.
Before starting anything, I'd like to know the main goal of a project, the customer's requirements, the budget, how long the project must stay online, and how many people can work on it. With all this information you can design the project with all the needed software and hardware and estimate the time to complete each phase.
With proper planning everything becomes easier, and you can install and maintain many systems and services online without too much trouble.
What happens to me is usually:
“We’ve got a new customer; he wants his new portal ready in 2 weeks. His website gets a lot of traffic and must always be online, so build something with high availability!”
“We’ve got a new customer; we must finish his mega project XX in 2 weeks, but it’s only a couple of Linux machines with YYY (insert an unknown distro here) running this software (insert an unknown software here) with this DB (insert an unknown DB here).”
“We’ve got a new customer; he wants his e-commerce site load-balanced across 4 machines, everything replicated and maintained by us, and he asks for 99.9999% uptime. The budget for the project is YYY (insert an excessively low budget here).”
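Numbers like 99.9999% sound harmless until you translate them into a downtime budget. A minimal sketch of that arithmetic, assuming a 365.25-day year (awk is only used as a calculator here):

```shell
# How much downtime per year does a given availability actually allow?
# 99.9999% ("six nines") leaves roughly half a minute per year.
avail=99.9999
awk -v a="$avail" 'BEGIN {
    printf "%.1f seconds of downtime allowed per year\n", (1 - a/100) * 365.25 * 24 * 3600
}'
```

Half a minute a year barely covers a single reboot, which is why a figure like that and an excessively low budget cannot live in the same contract.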
Security is an important thing in every company: you can have good services, but your customers won’t be happy knowing that all their information has been stolen (Sony?).
Security is not something you can do in your spare time or once every 6 months; it’s a set of best practices, rules, and configurations that must be followed every day. Sadly, no one seems interested in security until it’s too late.
For example, you could have a standard procedure for creating accounts on machines, a firewall set up on every Linux machine, and a standard set of configurations; this is a good starting point, but it becomes pointless if you receive requests like:
“Add an account on the DB server for John Smith; he’s a new consultant who must be able to start/stop the DB and change its files.”
“Open these ports on the firewall of machine XX (or stop the firewall); the development team must do something and the firewall is stopping them.”
“Don’t update the software on these servers; the development team is not sure their software works with the new version of libXX.”
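A “standard set of rules” doesn’t have to be complicated. Here is a minimal sketch of a default-deny baseline, assuming iptables, SSH on port 22, and a single published service on port 80 (the ports are placeholders, not a recommendation for any specific setup):

```shell
#!/bin/sh
# Default-deny baseline: drop everything inbound, then explicitly allow
# the few things each machine actually needs.
iptables -F                    # start from a clean slate
iptables -P INPUT DROP         # deny inbound traffic by default
iptables -P FORWARD DROP       # this host is not a router
iptables -P OUTPUT ACCEPT      # allow outbound traffic
iptables -A INPUT -i lo -j ACCEPT                                 # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # replies to our own connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # management SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                     # the one published service
```

With a baseline like this, every exception requested (“open these ports”) can become one explicit, documented line instead of a stopped firewall.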
The last of those requests brings up the next point: updates.
My motto could be “better updated than sorry”: on my desktops I update the OS the day a release becomes stable, and on servers I’d like to have a planned monthly update day, while staying ready to do an extra update if a big security bug comes out.
Having an updated system means you are safe from known security bugs and code bugs, and you have all the latest features of the software. Naturally, if you use a piece of software you should know what “function XX is now called XX_h” means, and whether your software could be affected or not.
If in doubt, apply the updates on a test server first, and if everything is fine, update the production server.
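Knowing whether you are affected starts with comparing the version you run against the version that carries the fix. A minimal sketch, assuming GNU sort’s -V (version sort) is available; the version numbers are made up for the example:

```shell
installed="5.8.8"    # hypothetical version running in production
patched="5.10.1"     # hypothetical version that fixes the known bug

# sort -V orders version strings numerically field by field,
# so the older of the two ends up on the first line
oldest=$(printf '%s\n' "$installed" "$patched" | sort -V | head -n 1)

if [ "$oldest" = "$installed" ] && [ "$installed" != "$patched" ]; then
    echo "update needed"       # prints "update needed" for the versions above
else
    echo "already current"
fi
```

A plain string comparison would get this wrong (“5.8” sorts after “5.10” alphabetically), which is exactly the kind of detail worth checking on the test server before the monthly update day.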
What happens many times is that a lot of managers follow the motto “If it works, don’t touch it”, so I’ve seen installations of Linux distributions made from the original CD and never updated: “the service is up, why change anything?”
Or worse: many people who don’t fully understand open source now use open CMSs (Drupal, Joomla, WP) because they are great and free; for them the magic words are “free of license”. What these people don’t understand is that you cannot keep a 2-year-old open source CMS online, with everyone knowing its security bugs, and hope it will work forever. I’ve seen at least a dozen sites hacked through known CMS bugs.
Or the worst thing is when I ask developers if I can upgrade XX (a Perl module, PHP, or a JVM), and usually their answer is “We certify only version FF, the one we have been using for a year”, and so I cannot update anything…
To keep things simple, the best solution would be to have one distribution, at the same release level, on all servers, and to use only one application stack to deliver services (Java, Ruby, PHP).
I understand this can become too strict: over time new releases of the distribution come out and it’s not so easy to upgrade all the servers, and different groups may need different software.
What I really don’t understand is the use of many different solutions in the same data center, such as Red Hat for the JBoss machines, Debian for the LAMP servers, CentOS for the development machines, and SUSE for the DB. And when you ask why, sometimes the answer is “the former sysadmin liked that distro” or “a consultant told us to use it”, which makes no sense at all to me: having multiple distros means that your system administrators must have much more knowledge.
And the same thing applies to the software stack for a service: recently I’ve seen a service that uses Tomcat with a Java application for one section of the site, Apache and Perl for another, and PHP to run the scheduled tasks via crontab. Madness, in my eyes…
I love being a system administrator, and in my dreams there is a logic behind all the machines and services I’m responsible for, but like many dreams this clashes with reality…