WINDOWS NETWORKING: Doing It the Samba Way

By Steggy
BCS Technology Limited
Email: [email protected]
Web Site: http://www.bcstechnology.net

Odds are you have networked, or are interested in networking, the Windows-powered PCs at your location. I have to confess that despite my company's focus on UNIX and Linux, a substantial part of my billable time seems to involve Windows networks of one form or another. The fact is, the majority of businesses have at least a couple of PCs running Windows, which means Microsoft networking seems to be as unavoidable as death and taxes.

Before I get too far along, I should note that it is possible for a Macintosh to join in on the fun and pretend it's a Windows machine. Now I realize that doing so is a step down for a Mac, kind of like a Kentucky Derby contender rubbing withers with a bunch of uncouth male donkeys, but what's a computer to do? If the other guys can't speak your language then you have to speak theirs.

Anyhow, back to Windows networking. There are several ways to go about it:

Peer-to-peer. Such a network consists solely of Windows workstations linked together through an Ethernet hub or switch, a wireless arrangement, or a combination thereof (by "workstation," I mean a PC running Windows 95/98/ME/XP or Windows 2000 Professional). In fact, a minimal network can be cobbled together by connecting two PCs with a crossover patch cord -- no hub or switch is needed.

Homogeneous client/server. This is a mixture of Windows workstations and Windows servers. Users on the workstations log into a Windows NT/2000/2003 server, which is usually configured to host a Windows "domain." The server that controls the domain is called (surprise!) the primary domain controller or PDC. Think of the PDC as the "network police." On a small network, the PDC may well be the only server present and thus will also provide file and print services to the workstations. Since Windows servers don't scale very well and tend to stumble if asked to do too much at any given instant, it is customary in larger installations to divvy up duties amongst several servers, any of which could also act as backup domain controllers (BDCs). Note that a Windows domain has no relationship whatsoever to an Internet domain, such as the one in this site's address.

Heterogeneous client/server. This is usually a conglomeration of Windows workstations and various kinds of servers, of which at least one will not be a Windows machine. The behavior is similar to a homogeneous network, except that it is likely the user will be logging into a Linux or UNIX server in order to be granted access, and will see UNIX or Linux resources show up in the "network neighborhood" on his/her PC. As with the homogeneous network, at least one server will assume the role of network police.

Of the three arrangements described above, peer-to-peer is generally the easiest to get up and running. Little technical knowledge is required -- there's no server to deal with -- and the defaults configured by Windows work in the majority of cases. The downside is that peer-to-peer is very insecure: since there is no central authentication, there is effectively no such thing as an unauthorized user on such a network. You certainly would not want to expose such a network to the Internet. (Actually, it is generally unwise to expose any Windows network directly to the Internet.) In my opinion, peer-to-peer is totally inappropriate in a business setting -- the lack of security in itself should be sufficient to convince anyone there's a better way. For that reason alone, my company does not install such systems. If a client insists on a peer-to-peer installation, we will politely suggest they seek help from another source.

That leaves client/server. Of the two types, the homogeneous network tends to be less demanding to set up and configure. This is because Microsoft has expended considerable effort in simplifying the networking of their workstation operating systems with their server operating systems. Rank amateurs can build a homogeneous Windows network with virtually no technical information, simply by "doing it by the (Microsoft) numbers."

The relative simplicity of setting up an all-Windows network, along with the "if it's all Windows it'll work better" attitude that pervades much contemporary thinking, has caused many businesses to choose this approach without really considering the technical implications of doing so. As many of them soon discovered once the system was running, Microsoft expended far more effort on style than on substance. The result tends to be something analogous to a beautiful, multi-story brick home boasting all sorts of modern conveniences -- but erected on a cracked foundation sunk into a marsh that breeds malarial mosquitoes.

That leaves us with the heterogeneous network model. You may think that you have never encountered a heterogeneous network -- indeed, this may be the first time you have even heard of such a thing. You may not even know what heterogeneous means (hint: it's not about sex). Now, I'm not one who usually points out the obvious, but I'll make an exception: since you are reading these words, you are connected to a heterogeneous network, which we call the Internet. The Internet is the largest heterogeneous network in the known Solar System.

As one might expect, a heterogeneous Windows network is more technically demanding to configure than an equivalent homogeneous system. The reason is simple: you are getting disparate operating systems to converse in mutually acceptable ways. In my earlier animal analogy, I implied that your UNIX server is a race horse and Windows workstations are, er...jackasses. Therefore, it is necessary for UNIX to bray rather than whinny when on a Windows network -- that is, to speak the lingua franca of Windows networking. So the heterogeneous network administrator will have to know more about his system than his Microsoft counterpart.

Now, before you get too discouraged and stop reading, I'd like to assure you that despite the extra initial effort, the resulting system will be most rewarding. You will achieve a degree of control over your network that most Windows jocks can only dream about. Also, you will enjoy a level of security and stability that is difficult to achieve in the Microsoft equivalent. In fact, some of the trouble experienced in a homogeneous Windows network will be eliminated by getting rid of the part that usually causes the most trouble: the Windows server. That's right: take that thing and send it to the nearest recycling center (or format the hard drive and load UNIX or Linux on it). Fire up your Linux or UNIX host, make sure your TCP/IP is working properly, and grab some SMB server software.

Some SMB what??? SMB (Server Message Block) is the protocol by which Windows machines communicate on the network. In order for a non-Windows system to join in, it has to speak SMB -- bray instead of whinny. There are several ways to go about this in the UNIX or Linux environment. You can purchase commercial SMB software for your flavor of UNIX (e.g., FacetCorp's FacetWin or Tarantella's VisionFS), for which you will pay a per-seat licensing charge. Or you can install Samba, which may be freely downloaded.

I'm a strong advocate of Samba, not so much because of its zero per-seat cost (which, of course, is a nice feature) but because of its technical excellence and adaptability. Samba does all that is needed to run a Windows network, including acting as the primary domain controller, without dragging along a lot of the old baggage that Windows has inherited over the years. Plus the Samba team takes performance, reliability and security very seriously; more seriously, I daresay, than does Microsoft. Samba has been continuously improved for some 10 years and when bugs are discovered, they are addressed in a timely fashion. As a bonus, you don't have to hand over money to get the bug fixes (those things that the Redmond crowd euphemistically refers to as "service packs").

Now, you may be asking yourself questions like, "Why does this Windows to UNIX thing have to be so complicated? How is it that something like Samba is free? And, isn't there something illegal about using Windows networking without a Windows server?" The short answers are: 1) Because Microsoft made it that way. 2) The best things in life usually are free. 3) Don't worry, bubba! The network police won't arrest you.

So take a close look at Samba and get out from under Bill Gates' thumb. If you are willing to learn a little something about Microsoft networking and are willing to invest some time to get more acquainted with your UNIX or Linux system, you will soon have one server running Samba and doing the work of numerous Windows servers. But first, a little history.

Microsoft Networking: How did it get so sloppy?

The current server edition of Windows (Windows 2003) is the end result of many years of effort by Microsoft and others. It had its origins in the early 1980s, when IBM, working with Sytek, came up with a simple LAN scheme for the then-new PC technology. Microsoft introduced a crude networking add-on to the MS-DOS 3.0 operating system called MS-NET, which utilized the file request redirection mechanisms built into that design (NetBIOS). MS-NET did work after a fashion, but its performance and reliability were certainly nothing to get excited over. In fact, I recall it as being pretty miserable in all respects. Be that as it may, MS-NET was the first instance where Microsoft had a formal method of exchanging data over a network of any kind. This was the beginning of SMB.

Not too long after the genesis of MS-NET, work commenced on the OS/2 windowing operating system, which was to be a collaborative effort of IBM and Microsoft. The plan was for OS/2 to fully support networking, as well as present a GUI (graphical user interface) that would be similar to that of the recently released Apple Macintosh. Microsoft had also secretly started development on what would become Windows, hoping to quickly steal market share from the Mac. Windows 1.0 was the result. It probably resulted in more three-finger salutes being executed than any other DOS-based product in history. Soon Windows/386 followed and, slowly but surely, the Redmond gang started getting stuff to work.

However, as the pace of development picked up, Microsoft's resources became overburdened in supporting both the OS/2 and Windows projects, as well as in maintaining the all-important MS-DOS cash cow and the various productivity packages then in distribution. Something had to give. Also, one could suppose that Bill Gates didn't really want to have to share the fruits of his company's labor with IBM. So in 1990, Microsoft abruptly abandoned OS/2 and diverted the freed-up resources to Windows development.

Meanwhile, work had been moving forward on a successor to MS-NET called LAN Manager (LANMAN), still MS-DOS based. As it turned out, Microsoft (in a now-familiar tactic) had "borrowed" some of the OS/2 networking technology to build LANMAN. Many thought LANMAN to be an improvement over MS-NET, even though it lacked adequate security features. Obviously, that weakness didn't bother Microsoft very much, because in 1992 they officiated at a shotgun wedding between Windows and LANMAN, resulting in a product called Windows for Workgroups (WWG). This was the first version of Windows that could be networked out of the box. As it turned out, WWG was the model for all future Microsoft networking: point and click to get to what was wanted, supported by a browse service running on one of the machines. Excavation had been started to create the Network Neighborhood.

WWG could only support a peer-to-peer network. This was fine for a small office with a few machines, but unsuitable for anything larger where many resources had to be shared or reasonable security had to be maintained. Microsoft realized this and, sacrificing quality for expediency in yet another shotgun wedding, patched together a server product that carried the LANMAN networking model forward: Windows NT Advanced Server 3.1, first released in August 1993. Windows NT Server 3.5 followed in the fall of 1994 (dropping the "Advanced" label), then Windows NT 3.51 in 1995 and Windows NT 4.0 in 1996. Windows NT 5.0, which was originally slated for release in 1998, finally appeared in 2000, renamed Windows 2000 (Windows 2003, of course, is the most recent descendant of NT).

NT Advanced Server 3.1 was anything but "advanced." In fact, it was nearly useless because it crashed with little provocation. However, with each new release and service pack, performance and stability gradually improved -- as did complexity and resource consumption (in terms of code size, Linux is to Windows 2003 server edition as a yacht is to a crude oil tanker). Although NT was no match for Novell's NetWare, the then-dominant network operating system, Microsoft started gaining market share, mostly because the familiar Windows "point and click" environment helped win over both users and business managers. Many of the latter, not understanding the technical aspects of Microsoft homogeneous networking, ordered that the less expensive NT be installed on new servers instead of the higher-priced but much more robust NetWare.

Emboldened by such successes, Bill Gates publicly asserted (as did numerous "experts") that Windows NT would not only force out NetWare but would shove aside UNIX. Gates' ultimate goal was "Windows on every desktop and Windows on every server." A timetable was even set: it was predicted that in 1996, Windows NT would surpass UNIX and both the latter and NetWare would fade into obscurity.

It didn't work that way. The Windows client/server model proved to be fragile. Those who replaced NetWare or UNIX with NT discovered that more powerful hardware was needed to maintain an equivalent level of performance. NT servers often crashed when service demands were high, a problem many companies were forced to resolve by installing more servers to bear the load. Bizarre network behavior would frequently cause workstation crashes for no apparent reason, making many system administrators familiar with the infamous "blue screen of death." Homogeneity produced a friendly environment for both users and viruses, the latter of which could quickly spread from one workstation to the rest of the system and wreak havoc. Homogeneity also hampered efforts to exchange data with non-Microsoft systems, which proved to be a critical issue for many large corporations.

Microsoft reacted to these and other problems by constantly releasing service packs, all the while proclaiming that NT was capable of meeting any organization's network needs. However, the resolution of scalability, security and standards issues was slow in coming, partially because of the patchwork nature of the software; and also because Microsoft was fixated on its "Windows everywhere" quest and thus was focused more on marketing than on software engineering. Also, Microsoft had become interested in a new computing environment: the Internet. These factors combined to limit Microsoft's ability to get Windows NT up to the performance standards that were expected by businesses, thus opening the door for non-Microsoft solutions.

Open Source SMB

The support of Windows networking on UNIX got its accidental start in 1991, when Australia's Andrew Tridgell was experimenting with some networking software called DECnet, developed by Digital Equipment Corporation (DEC). By chance, Tridge (as he is widely known) discovered that the DECnet software was using a protocol that was compatible with Microsoft's LANMAN. It was his first encounter with SMB.

Being a hacker in the truest sense of the word, Tridge worked to adapt SMB to a UNIX system, hoping to produce some measure of Windows to UNIX connectivity. Since he had no access to the formal SMB specifications, he was forced to tediously reverse-engineer the DECnet software's operation to determine how it worked. His efforts were successful and he called his new software Samba, a name that came about by running the letters SMB through the UNIX spelling dictionary.

Tridge's efforts were enhanced by the subsequent disclosure by Microsoft of the workings of SMB in a bid to get SMB adopted as an Internet file transfer standard (Microsoft even renamed SMB to CIFS, the Common Internet File System, in an effort to garner support). Several commercial software developers released SMB-on-UNIX packages, using the published Microsoft information to help along the development process. However, all of these packages have since become irrelevant, thanks to Samba's widespread acceptance.

As Samba's popularity has grown, so has the number of individuals involved in its development. Samba is now a worldwide project, in which talented programmers have ported Samba to almost all Linux and UNIX distributions. The most recent release of Samba, version 3.0, has achieved feature parity with Windows 2003 in virtually every respect, yet is able to out-perform the Microsoft product on the same type of hardware.

OpenServer and Samba

In my opinion, there is little justification for purchasing and maintaining the proprietary Microsoft server package. There are very few situations where Samba cannot do the job as well as or better than Windows server software. Many of the qualities of Samba that make it ideal for large-scale Windows networks are there because it is a carefully engineered package and because it runs on UNIX (or Linux -- consider the two interchangeable for the sake of discussion). The latter means that Samba is able to take advantage of the secure execution environment and rugged file system that are the foundation of UNIX.

SCO packages Samba with OpenServer 5.0.7 -- check your Skunkware CD -- and makes it available as a download for older OSR5 releases. If your SCO system is running VisionFS, which was supplied with OSR 5.0.6 and earlier, you can replace it with Samba and gain some functionality (among other things, VisionFS doesn't provide true PDC capabilities). You should know, however, that SCO's Samba package tends to be several releases behind the development curve. As of this writing, the latest Samba distribution available for download from SCO's FTP site is 2.2.6. Consider that the most recent Samba release is 3.0.0, and that you really should be running at least version 2.2.8a to pick up the fixes for security vulnerabilities found in older releases.
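
If you are not sure which release you actually have, smbd itself will tell you. A quick check (the path to the smbd binary depends on how the SCO package or your own build laid things out, so adjust as needed):

   smbd -V
   # prints a single line such as: Version 2.2.6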

If you wish to install any Samba version after 2.2.6, you will have to download the source code and build your own package using GNU tools such as gcc and make. I had no trouble building and installing Samba 2.2.8a on my OSR 5.0.6 server. You should be aware that some directory references in the roll-your-own version may not be the same as those compiled into the SCO package, although there are ways to make the installation conform to the SCO custom installation of Samba 2.2.6.

Briefly, to build from the source, first download the source archive and extract the source tree onto your machine. Be sure to check the GnuPG signatures for the source files to assure that you have not received a tainted copy. Next, change to the directory where you stored the source tree. In there, you will find a directory named source. Change to that directory and run ./configure -h | more to see the command line options that can be used to tailor the software to your environment (e.g., which directories will be used to store the Samba binaries). Since these options run to several screens, you might wish to print the output of ./configure -h. Run ./configure with the appropriate options and configure will build a makefile based upon various tests run against your system. You might wish to examine that file (its name is Makefile -- note the capital M) before going any further, as it will include definitions that say where the binaries and such are to be stored. Edit these locations if desired.
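
As a minimal sketch of that sequence -- the archive names and the installation prefix shown here are illustrative choices, not SCO's packaged layout, and the signature check assumes you have GnuPG installed with the Samba release key imported:

   gunzip samba-2.2.8a.tar.gz
   gpg --verify samba-2.2.8a.tar.asc samba-2.2.8a.tar    # verify before unpacking
   tar xvf samba-2.2.8a.tar
   cd samba-2.2.8a/source
   ./configure -h | more                   # review the available options first
   ./configure --prefix=/usr/local/samba   # example prefix only
   more Makefile                           # inspect the generated makefile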

With the makefile prepared, run make clean; make and compilation will proceed. You will receive errors if any required libraries are missing from your system, which means you need to get cracking and install all the patches that have been issued by SCO. Once compilation has completed, run make install to install the binaries and such into your system.
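
In other words, something along these lines (run the install step as root so the files can be written to the target directories):

   make clean; make      # compile; missing libraries or headers will surface here
   make install          # as root: install binaries, man pages and support files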

There is more to do, of course, before you can turn Samba loose. Like many other UNIX programs, Samba is controlled by a configuration file, smb.conf. Virtually everything that Samba does or doesn't do is determined by statements in smb.conf. Creating this file is a topic that is far beyond the scope of this or any other on-line article. I recommend that you obtain a copy of the O'Reilly book Using Samba and give it a good read as you proceed. Using Samba thoroughly discusses everything you need to know to implement a Samba-based Windows network, including related network issues that you may encounter. Appendix B is devoted to a blow-by-blow description of every option that is allowed in smb.conf. In my opinion, this book is an essential reference for anyone who plans to include Samba in his or her network.
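
Just to give you a feel for what smb.conf looks like, here is a bare-bones, illustrative example of a standalone file server. The workgroup name, server name and share path are placeholders, and this is nowhere near a complete production configuration, let alone a PDC setup:

   [global]
      ; the workgroup and NetBIOS names below are placeholders
      workgroup = MYGROUP
      netbios name = UNIXSRV
      security = user
      encrypt passwords = yes

   [homes]
      comment = Home directories
      browseable = no
      writable = yes

   ; a shared scratch area; the path is a placeholder
   [public]
      comment = Shared scratch area
      path = /export/public
      writable = yes

Run testparm after every change to smb.conf: it will flag syntax errors and show you how Samba actually interpreted the file.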

System Implications

Like everything else running on your server, Samba will demand the use of some percentage of the available resources, especially in the realm of networking and raw memory consumption. Each active Samba connection spawns a daemon to handle the communications with the client. If you have 100 clients in use, there will be 100 instances of Samba's smbd daemon running (plus a "master" daemon started at boot time). Obviously, each instance of smbd will consume memory, will compete for run time, and will compete for I/O access. Accordingly, your system must be adequately sized to handle the worst case load.
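
Two quick ways to see that load for yourself while the system is in use (assuming Samba's bin directory is on your PATH):

   smbstatus                                   # connections, shares and locks per client
   ps -ef | grep smbd | grep -v grep | wc -l   # rough count of running smbd processes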

Your very first consideration should be the capabilities of the server hardware. Now, it is entirely possible to run Samba on an older 486 box with 64 megs of RAM. However, such a machine would quickly slow down as more clients connected. Aside from the speed at which the 486 can actually execute machine instructions, the relatively small amount of memory would result in paging (swapping). Since disk access can be as much as 10,000 times slower than memory access, paging will have a substantially negative effect on system performance. The best way to avoid paging is to equip the server with enough RAM to allow all processes to remain in core, even when sleeping. Given that memory at this time is relatively inexpensive, there is little excuse for not equipping your server with lots of RAM. Think of RAM as money in the bank: you'll never have too much! Oh yes, trade in that 486 for something a little faster if you don't want your Windows users grumbling about how slow the system runs. A nice AMD Athlon (Barton core) processor paired with a VIA KT400 chipset produces a very high level of performance at a reasonable cost (certainly less costly than an equivalent Pentium 4 combination).
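
As a very rough, back-of-the-envelope illustration (the per-process figure is an assumption for planning purposes, not a measured benchmark): if each smbd instance ties up somewhere on the order of 1 to 2 MB, then 100 connected clients will want 100 to 200 MB for Samba alone, before the kernel, its I/O buffers, and every other daemon on the box get their share. On that basis, 256 MB is marginal for a 100-seat server, while 512 MB or more buys comfortable headroom.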

The sharing of file system resources by Samba will increase the demands on the disk I/O subsystem. At the risk of incurring the wrath of some people, I'll assert right now that IDE hard drives have no business being in a Samba server. The ATA bus/IDE drive combination was designed to be cheap, not fast. IDE drives represent a major performance bottleneck, especially in the face of high volume concurrent disk requests, as would be anticipated in a Samba environment. If you are serious about having a high performance, Samba-based network, use SCSI drives in your server(s), and consider hardware based RAID 5.

Disk I/O performance can be greatly influenced by how much memory has been allocated to I/O buffers, a value controlled in the SCO OSR5 kernel by the NBUF tunable parameter. An unmodified OSR5 installation computes the "ideal" value for NBUF at boot time, often resulting in a value that is not appropriate for the loads carried by the server. It is better to manually tune NBUF so that between 5 and 10 percent of available memory is allocated to I/O buffers (each buffer is exactly 1024 bytes, the size of a UNIX disk block). Up to a point, more buffers tend to result in better apparent performance, as disk accesses are not as frequent. The downside of a large buffer space is that if a crash occurs, a substantial amount of data could be lost due to "dirty" buffers not having been written to disk. Also, with a large buffer space, the kernel will consume more CPU cycles managing buffers, an activity that actually hurts machine throughput.

You also need to consider the NHINODE kernel parameter, which controls the size of the inode hash table. When a file is opened, a copy of its inode has to be loaded from disk into memory. If a lot of files are opened, a lot of inodes have to be managed by the kernel. The kernel maintains a list of active inodes in a hash list, which must be searched each time an inode has to be updated in some way. Increasing NHINODE tends to improve the efficiency of the hash list, decreasing the amount of time the kernel will spend searching the inode table. Since a large Samba installation could result in hundreds of files being simultaneously opened, it may help performance to increase NHINODE from its default value of 128 to at least 1024. You should use powers of two (e.g., 2048, 4096, etc.) to define this value. The maximum permissible value is 8192, which should be used if you are not sure just how heavily loaded the filesystem will become. The cost in memory is modest relative to the potential gain in performance on a busy server.
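
On OSR5, both NBUF and NHINODE are changed with the kernel configuration tool and take effect after a relink and reboot. A sketch of the procedure, with a worked NBUF figure for a hypothetical 512 MB server (menu layout varies a bit between releases, so treat this as a guide rather than a script):

   # 5% of 512 MB is roughly 26,000 1 KB buffers; 10% is roughly 52,000,
   # so an NBUF value in the 26000-52000 range fits the guideline above.
   cd /etc/conf/cf.d
   ./configure            # interactive menus; set NBUF and NHINODE here
   ./link_unix            # relink the kernel, answer the prompts, then reboot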

Samba also imposes substantial loads on the networking components of the OSR5 kernel. As a result, it may be necessary to tune some of the kernel STREAMS parameters to avoid performance degradation or runtime errors. It should be noted that running out of STREAMS resources can trigger serious kernel errors that might force you to reboot the machine. Therefore, it is essential that STREAMS be tuned to support the load imposed by Samba and other networking functions, such as DNS, DHCP, FTP, etc. What follows are some recommendations to get you started. You should periodically review STREAMS allocation statistics by using netstat -m and then make appropriate adjustments. Pay particular attention to the fail column -- you may see a trend that can be corrected before the system becomes unstable.
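
Monitoring is as simple as running netstat -m from time to time, or logging snapshots so that trends in the fail column stand out over days or weeks (the log file path below is an arbitrary choice):

   netstat -m | more                             # current STREAMS usage and failure counts
   (date; netstat -m) >> /usr/adm/streams.log    # timestamped snapshot, e.g. run from cron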

For the purposes of establishing a baseline, the following STREAMS parameters should be examined and changed as required (a consolidated tuning sketch follows the parameter descriptions):

NSTREAM Each network connection, whether set up by Samba or by another daemon (e.g., named), requires the use of a "stream head structure," which is assigned from a fixed-size pool determined by this parameter. Conceptually, a stream head is to a network connection as an inode entry is to an open file: it is a memory-resident data structure that contains the particulars of the connection.

The default value for NSTREAM is 64, which is woefully inadequate for any system on which Samba is to be run. The absolute minimum safe value is 512. If anything, err on the side of caution. There's no harm in increasing NSTREAM to at least 2048 or more. If your system is very busy, often manifested by the occasional presence of "out of STREAMS" error messages in syslog and on the console screen, consider bumping this value to 8192 or even higher (the highest permissible value is 32,768). It is best to use values that are powers of two, such as 4096, 8192, 16,384, etc. A high NSTREAM value will not adversely affect memory availability to other processes.

If, after increasing NSTREAM to its maximum, you still get "out of STREAMS" type errors, you need to review the overall loading on your server. It is possible the server is being asked to handle too many concurrent network services.

NSTRPAGES This value controls how much of the available memory in pages (4096 bytes per page) may be used by the STREAMS components in the kernel. The default value of 500 (approximately 2 megabytes) is generally insufficient for all but the smallest installations. A good starting point is 2048, which will allow STREAMS to consume up to 8 megabytes for data structures, buffers and so forth. If you start to see entries in the fail column of the netstat -m output, increase this value to 4096 pages (which will allocate 16 megabytes). The maximum permissible value is 8000 pages, which might be needed on a heavily loaded system. In general, if you increase NSTREAM you should probably increase this parameter.

STRMAXBLK This value controls the maximum size of a STREAMS buffer. I recommend you set this to its maximum permissible value of 524,288, as lower values tend to reduce network I/O performance. The kernel will split up large buffers into smaller structures if warranted, so there is no real waste of memory caused by using the maximum permissible size.

NUMSP This value determines the number of streams pipe devices that will be available for network communication. Networking involves constant communication between various software and hardware layers, using pipes to pass data back and forth. Generally speaking, more pipes tend to result in more simultaneous paths for communications -- producing some performance increases. The default value of 64 should be bumped up to the permitted maximum of 256 to support the added load from Samba. I have noted that in some cases, an insufficient number of pipes can cause Samba processes to hang at odd intervals, giving the user the erroneous impression that his or her workstation has gone kaput.
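
Pulling the above together, a baseline tuning pass might look like this. The values are simply the starting points suggested in the preceding discussion, and as with NBUF and NHINODE the kernel must be relinked and the system rebooted before they take effect:

   # Suggested starting points from the text:
   #   NSTREAM   = 2048
   #   NSTRPAGES = 2048
   #   STRMAXBLK = 524288
   #   NUMSP     = 256
   cd /etc/conf/cf.d
   ./configure            # set the four STREAMS values above
   ./link_unix            # relink the kernel, then reboot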

In addition to tuning the kernel to support the load, take a careful look at the network hardware itself. Where possible, use Ethernet switches in place of hubs and if your system is really busy, consider splitting up your network into several subnets, each with its own switch and server network card. Windows networking is network-intensive and an inadequate network infrastructure can result in the server being blamed for performance bottlenecks that are not its fault.

Summary

The Windows network neighborhood is no longer a one-horse town controlled by one entity. Your UNIX or Linux server can step in and bring a measure of performance and stability that may be sorely lacking. All you need is Samba, some technical information, and the willingness to learn some new concepts. You have nothing to lose and a lot to gain. Plus the money you save by not having to purchase Windows server licenses can be used for more useful purposes. I'm sure you'll think of something (if you can't figure out what to do with that extra cash, contact me and I'll try to help).


