Like all new technologies, Linux offers pharmaceutical developers potential benefits that also carry risk.
The inspiration for this column came during a phone conversation with my father-in-law, who uses computers the way most of us do: for email, web surfing, and, increasingly, for digital image management. Several years ago, I convinced him to switch from the Mac platform to a PC platform, and the complaining and gnashing of teeth has never stopped. Let's face it: despite dramatic improvements in its user interface, Windows remains more challenging than the Macintosh for the casual user.
In our conversation, I mentioned casually (no, enthusiastically) that I was setting up a Linux server at home. Rather than the question I expected ("What's that?"), he replied, "Oh yeah, I was thinking of trying Linux out, too. I heard it was easier than the Mac." My shock was immediately interlaced with the fear and horror of becoming a full-time dedicated Linux Help Desk for the next year or two.
On further questioning, it became obvious that he had become enamored with all the talk of Linux in the press, but really knew very little about it. Right then I realized that this was probably a common affliction that some ACT readers may share. And so, this column is about Linux, open source, and their potential in the pharmaceutical industry.
What is Linux, and do I want it? If you haven't asked this question, I hope that someone in your organization or company has. The media attention and hype are based on the significant power and potential of Linux in the appropriate circumstances. Don't take this the wrong way: I am not suggesting that anybody should move from Microsoft Windows-based servers and workstations to Linux servers. Rather, I am urging that now is the time to understand Linux and appreciate its benefits and risks in planning for the future.
It is important for everyone in an organization to have at least a fundamental understanding of the risks and benefits of Linux. As end users, we are all likely to be affected by Linux in the coming decade. To understand Linux, we must first think about operating systems in general, and understand the history of how Linux was developed.
Operating systems
The computer is, in the end, a collection of assorted pieces of hardware. The earliest computers in commercial use were programmed directly from a flip-switch control panel or through punch cards and paper tape, first in machine language and later in more user-friendly assembly language. These languages took operators step-by-step through every operation a computer performed. The closest analogy might be addition and subtraction on a simple calculator: take one number from here and another number from there, add them together, and put the result over here. Stringing thousands of these instructions together made it possible to build programs that performed sophisticated calculations and complex tasks. The computers that delivered men to the moon worked this way. In those systems the computer program itself often managed the input and output of data, and other housekeeping tasks.
Computer systems evolved by identifying certain repetitive tasks (reading data on a tape, writing data to a disk) and creating an operating system (abbreviated as OS) that handled those tasks. Computer programs running under an OS would still do the calculations, but all the interactions with the hardware would be performed by the OS. As computers became more sophisticated, so did the OSs that they used. The first computers could run only one job or program at a time; later OSs managed simultaneous or queued jobs. Eventually, OSs were required to perform more and more sophisticated operations, such as interacting with floppy disks, hard disks, printers, monitors, networks, USB ports, CD-ROM burners, DVD readers, and other devices. As you can imagine, a robust OS must be capable of interacting with all of those devices, produced by dozens, if not hundreds, of manufacturers. You may be interested to know that the first Microsoft version of an OS was named MS-DOS, for Microsoft Disk Operating System (now you know!).
The prototypical OS consisted of a kernel, a shell, and related applications. The kernel is the part of the OS that interacts directly with the hardware components, giving them direct instructions. The shell is also known as a command line interpreter. It takes simple, terse single-line statements (for example: dir *.doc) and translates them into complex instructions to the kernel. The result of the aforementioned DOS command would be a listing of all MS Word documents in the current directory. The command line interpreter in MS-DOS is incorporated in a single file, known as command.com; see if you can find it on your computer (but do not delete it). Programs running under a modern OS can operate through the shell, or can interact directly with the kernel for OS functions. The final components of an OS are related applications such as the format command in MS-DOS, or the text editor Emacs that is included with Linux and many other UNIX-like OSs. These applications perform some of the core functionality required in an OS, and operate much like a program: through the shell, directly to the kernel, or both.
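A small illustration of the shell at work (the directory and file names here are invented for the example): the Linux counterpart of the DOS command above is ls *.doc, and in a Linux shell such as bash it is the shell itself, not the ls program, that expands the wildcard into matching file names before the program ever runs:

```shell
# Work in a throwaway directory with a few sample files
cd "$(mktemp -d)"
touch report.doc notes.doc readme.txt

# MS-DOS: dir *.doc   (the dir command itself matches the pattern)
# Linux:  ls *.doc    (the shell expands *.doc first, then effectively
#                      runs: ls notes.doc report.doc)
ls *.doc    # lists notes.doc and report.doc, but not readme.txt
```

This division of labor is why wildcard behavior is consistent across Linux programs: every command benefits from the same shell expansion without implementing pattern matching itself.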
Although very powerful, the use of command lines requires a certain amount of commitment and computer sophistication, something that not everyone is willing to work to develop. To improve the usability of the computer, a graphical user interface (GUI) was developed: first by Xerox, then by Apple for the Macintosh, and eventually by Microsoft for Windows. The GUI provides a more intuitive, easier method for performing OS operations that appeals to less sophisticated computer users. Almost every OS today includes a GUI, especially for workstation and desktop users.
Linux emerges
The story of Linux starts with the UNIX system, the OS used in many of the world's workhorse server-class computing systems. UNIX is a command line-oriented OS initially developed at AT&T's Bell Labs in 1969 (and based on the earlier MULTICS system developed by MIT, GE, and Bell Labs), with a subsequent long and complex commercial history. The system was adopted in academic institutions (most notably UC Berkeley) and eventually in multiple commercial implementations in the 1980s. Over several decades, the UNIX system matured along several different paths, developing into a robust yet nonstandardized OS, with variants that include Sun Solaris, HP-UX, and IBM's AIX. Linux arose from this milieu because of two key developments: the rise of a radical fringe at MIT, known as GNU (pronounced "guh-noo"), and the publication of a standard for the UNIX OS.
The GNU (meaning "GNU's Not UNIX") project was conceived and launched in 1983 by MIT programmer Richard Stallman as a completely free, open source OS. The components of the OS were created by software developers all over the world, replicating, borrowing, and extending many of the functional features of the commercial UNIX system. The result, by 1990, was a fully featured OS with a complete shell, OS applications, and significant added functionality through applications and programs, but it was missing the key component: an open source kernel. The requirements for a UNIX kernel were made publicly available through a UNIX standards effort named POSIX, which began in 1985 as a means to bring together the many different versions of UNIX that had developed.
With GNU and POSIX, the stage was set for the emergence of Linux, which in its most fundamental form was the UNIX-like kernel that GNU was missing. The Linux kernel was written by a hacker from Finland named Linus Torvalds (see "Linus's First Steps" box), who announced it to the world unceremoniously in 1991 in a Usenet group (like a bulletin board) posting. He named the kernel Linux (of obvious origin!). With a host of assistance from volunteer developers, Torvalds brought the GNU shells and tools to the Linux kernel, creating the first completely free, open source UNIX-like system conforming to the POSIX standard. The kernel, released to the public under the GNU General Public License, together with the GNU tools and shells composes a complete UNIX-like operating system, also referred to as Linux.
At first Linux was very much like MS-DOS: a command line OS that closely replicated the user interface and functionality of UNIX. Subsequently, two Windows-like graphical environments (KDE and GNOME) were developed, so that Linux could have a look and feel similar to the Macintosh or Windows. Over the past few years a number of commercial vendors have begun to distribute Linux bundled together with a large number of GNU-developed programs that run the gamut of functionality: from full Word-compatible word processors, to spreadsheet programs, graphics programs, and many, many more. The Linux user has most of the general-purpose tools available to the typical Windows user. The companies distributing Linux (whose names include Red Hat, Debian, Mandrake, and others) are required, under the GNU General Public License, to keep the software's source code freely available. They also validate and monitor the quality of the distribution, and sell the software along with support services, including upgrades, security fixes, custom consulting, etc., for corporate users. The goal is to provide the same type of stability and support in an operating system that can be found in the Microsoft or Apple OSs.
Why Linux?
The use of Linux is on a rapid upswing for a variety of reasons. First of all, Linux is free at the extreme, and very inexpensive in most implementations. Businesses that are conserving cash or that have a great need for computers (in grid computing, for example) can save hundreds of thousands to millions of dollars on OS licenses and related software by using Linux. Second, many IT professionals are highly experienced in managing UNIX servers and find it easy to port their skills to Linux server management. Third, Linux and its cousin UNIX were developed over time to be highly secure, scalable, and reliable. Linux servers can go for months or years without crashing or requiring a reboot. One reason for this is that there are thousands of programmers debugging and fixing UNIX and Linux code, all sharing information freely with each other in an open community.
Linux and related applications are all open source, which means that any bugs or local customizations can be worked on by anyone who is interested. Nobody has to beg, hope, and pray that a software vendor will fix a bug.
These benefits have made Linux a very popular platform for Web servers, file servers, email servers, and other network applications. Recent intense efforts are also creating a higher-performance environment for Linux, one that could handle mission-critical data such as genomic databases.
The problems with Linux
If the benefits of Linux make it sound like a panacea, beware. Linux has many downsides to consider.
Limited compatibility. Linux may not yet be the ideal solution for the desktop. Although there are several MS Word-compatible Linux word processors, none of them allows the seamless transfer of documents between users to which we have become accustomed. In addition, other programs and applications may not even exist for the Linux environment. Linux desktop users may have Windows envy and suffer for it.
Open source risk. Open source development has very little control other than that imposed by the developer. Software may be developed without proper methodology and controls. Although the software may have been exposed to thousands of beta testers, the particular function that you are interested in may not have been well tested. The quality and reliability of Linux applications (and, for that matter, the OS) are somewhat hit or miss. As discussed earlier, much of this uncertainty is removed by the emergence of commercial Linux vendors, like Red Hat. However, it is still not clear which, if any, of these Linux companies will survive. The risk of committing to one of them, and to their particular flavor of Linux, must be considered when deciding to choose Linux for corporate use.
Uncertain application support. With some applications, the user is dependent on the loosely tied community that is developing them. If they lose interest or move on to some other application, all support for the abandoned applications could disappear. Linux also tends to lag behind Windows in supporting new hardware for a variety of reasons. Some hardware vendors keep their interfaces proprietary and aren't willing to develop a Linux interface; in other cases there is simply a lag between when hardware is released and when someone in the Linux community decides to write a driver for it. Many of these issues share a single common thread: there is often no commercial entity to hold accountable for the quality, performance, or functionality of Linux and Linux applications. This single fact is often more than many corporations are willing to tolerate.
Difficult for novice users. Linux (and this is for my father-in-law) is just not that easy for novices to use. Those who use wizards to configure their home computers will find that almost everything in Linux requires tweaking the program's configuration files by hand. This is beyond the capabilities of most people who are not IT professionals.
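As a small, typical illustration of that hand-tweaking (the server address below, 192.0.2.1, is a placeholder from the documentation address range, and the edit is made to a scratch copy rather than the live file): on most Linux systems of this era, changing your DNS server meant appending a nameserver line to /etc/resolv.conf yourself:

```shell
# Work on a scratch copy so the real /etc/resolv.conf is untouched
cp /etc/resolv.conf /tmp/resolv.conf.demo 2>/dev/null || touch /tmp/resolv.conf.demo

# Append a DNS server entry by hand -- no wizard involved
echo "nameserver 192.0.2.1" >> /tmp/resolv.conf.demo

# Confirm the edit took effect
grep "^nameserver" /tmp/resolv.conf.demo
```

No dialog box checks your spelling, and a stray typo in a file like this can silently break name resolution, which is precisely what makes the experience daunting for a casual user.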
Pharmaceutical development
The pharmaceutical industry has stringent requirements for software and the OS environment that may be problematic for Linux. Software used for data to support an FDA submission must meet the requirements of 21 CFR 11. Those include, among many others, a strict software development methodology and validation. That validation, in turn, requires a dependable, robust, and reliable operating system.
Because the OS and many Linux software applications are developed by a loosely organized, independent community of programmers, it is almost impossible to demonstrate adherence to 21 CFR 11 requirements. Although internally developed and commercial Linux software may be produced under these stringent controls, it will still have to operate on an OS developed by, to put it simply, volunteers. The attractive aspects of Linux may well drive some pharmaceutical companies, or a consortium of companies, to develop a strategy, perhaps one that involves validating particular Linux distributions. However, this road is uphill, expensive, and fraught with danger.
Meanwhile, Linux is here to stay and will be used by more and more companies for important business applications. Until your company installs a Linux desktop for you, I encourage you to do what I did: take that old, slow computer in your closet and a set of Red Hat disks (they come included with many introductory Red Hat books) and install Linux at home. It will seem like you just discovered an alternate universe. Have fun, but please don't ask me to be your Linux Help Desk!