Reflections on Scripting Languages

The Web continues to evolve, and with it the purpose and power of Web scripting languages. I’ve blogged about scripting languages – many times, RE: many issues – in the past; with the rise of .Net and the imminent release of PHP 5.0, perhaps it’s time to take another look at these languages.

Outside of the compiled vs. interpreted language differentiation, I don’t know what the best definition of Web scripting languages is. For example, I consider Perl a Web scripting language (one of its many uses), yet Perl is compiled – at run time.

Oh well. Consider this a look at the following Web scripting languages:

  • ASP
  • ColdFusion
  • JSP
  • Perl
  • PHP

There are other contenders – such as Lasso, a horrible, horrible language – but I will stick to those in the preceding list. And I leave out the templating systems, except as they may pertain to a specific language (such as Smarty on PHP).

All opinions expressed (except as linked, obviously) are my opinions; all errors are mine, as well.

ASP

First of all, let me begin by saying that I haven’t done any professional ASP coding – stuff I’ve been paid for – and I don’t have any experience with ASP.Net or whatever the fuck it’s called. So ignore the following if you’re already rolling your eyes.

That said, when I taught myself ASP, I didn’t build a HELLO WORLD page – I built a password-protected, session-enabled CMS that had a visitor section and admin section (add, edit, soft-delete users and articles, etc.). So I don’t have a lot of experience, but I know a bit about scripting languages.

When I first began work with ASP, I was surprised that it was so well thought out. When I needed a function – based on my work in other languages, or because it just seemed like it should be there – it was.

On the other hand, VBScript just totally blows. I just don’t like it. I do have VB experience (again, minimal), but … so what? I would have vastly preferred to use JavaScript as the logic for the ASP pages (ASP allows this), as JS is such a strong language. However, this doesn’t make a lot of sense – the whole “When in Rome, do as the Romans do…” point of view. The default is VBScript, most (all?) ASP coders are familiar with VBScript and probably not as familiar with JS, and I think there is a setting you may have to make on IIS to use JavaScript (could be just a default change; I can’t recall).

There is a lot going for ASP, however, and – as mentioned – I don’t know the whole ASP.Net framework, and I’ll bet that’s even better than the base ASP language. Also – very importantly – if you’re a dedicated MS house, ASP is probably the best bet: ASP is more tightly integrated into the MS framework than any other scripting language (leaving aside the debate over whether that’s a good thing or not, OK?).

Bottom Line Pros:

  • Tight integration with MS systems
  • Supported natively on MS’s IIS (no extra costs)
  • Excellent session control
  • Vast library of functions and other tools
  • Can call COM and COM+ objects

Bottom Line Cons:

  • Running on non-MS products (e.g. *NIX) requires third-party products (ChiliSoft, etc.)
  • Native on IIS – the overwhelming choice of deployment – and IIS is a relatively porous server
  • Logic language – VBScript – is weak
  • User-defined function ability appears limited to COM objects

ColdFusion

This is probably the language that I’m most familiar with – I’ve just worked at more jobs where this was the language of choice. Not necessarily an endorsement; not necessarily a ding. Reality. So be it.

ColdFusion – originally a product of Allaire, purchased around 2001-2 by Macromedia – is probably the poster child for simplified dynamic Web development. While other languages may scale better or offer richer C-style function libraries, ColdFusion is the easiest language to hook up to a database and template out a dynamic site.

ColdFusion has excellent – almost transparent – database interaction features. It also makes handling sessions – another huge Web-language issue – relatively trivial. Combined with a tag-based format that reads as English, this makes ColdFusion an ideal language to allow newbies to begin experimenting with database-driven sites.

Not surprisingly, the strengths of ColdFusion also work to its detriment: Simplification reduces the ability to do the complex (at least not proportionately as easily). One example is the most basic: Because CF is so easy to code, there are a lot of individuals out there who are (shiver!) running CF sites who really don’t have programming chops. CF allows this – double-edged sword and all that. It’s hard to find a Java programmer who doesn’t know some basic best practices (say, separating logic and presentation as much as possible). Most CF coders don’t understand this basic concept.

Bottom Line Pros:

  • Easy to use; easy to understand
  • The fastest way – such as for demos or proofs of concept – to get a dynamic Web site up and running
  • Tag-based language makes sense for HTML coders; database access is reduced to simple SQL and asking what is in result row X for column Y. Very transparent for the uninitiated
  • Runs – as a third-party app – on almost every platform/configuration

Bottom Line Cons:

  • Simplification at the expense – at times – of the ability to do the complex
  • Requires a third-party product (i.e. the ColdFusion server) to be installed
  • The latest release (no longer numbered) – ColdFusion MX – has been rewritten on a Java core, which is good and bad: Good, as it allows Java programmers to get access to way more info/extend further; bad, because the regular CF programmer is not Java ready
  • There have been changes that create havoc – for example, v5 introduced the CFGRAPH tag. As of version 6 (MX), that tag is deprecated in favor of CFCHART. Deprecated after one release. Doesn’t inspire confidence – will the code I write today work tomorrow??

JSP (Java Server Pages)

As with ASP, I have limited experience with JSP – mainly to teach myself how they work and all that.

There really isn’t a whole lot to say about JSP except the following, which will double as the Pros and Cons of the scripting language:

  • JSP seemed to first appear as a defense against ASP; this defense – leveraging Java in a scripting language – while clunky, worked well
  • JSP suffers from the same problems as Java: Compiled (JSPs are compiled upon first hit [slow]; zippy afterwards); complexity (for scripting developers, OO is hard); requires the whole Java infrastructure to be in place – coding and server – to be used. Often daunting. Sun has fucked up really badly on this, in my (inane) opinion
  • JSP benefits from the same strengths as Java: Many functions, robust OO infrastructure, large support network. And wherever Java goes, so will JSP. So it’s not a static language.
  • Often requires a third-party product (sometimes free; sometimes not) to run JSPs on a given server: Tomcat, JBoss, or any of a handful of other servlet containers.

Note: Russell Beattie has an interesting blog entry about JSP. Kind of a State of JSP entry: The good, the bad, and the fugly…

Perl

Before it was simple – or practical – to create a database-driven site, there was Perl. Perl CGIs, along with server-side includes (remember .shtml?), were the dynamic Web.

Today, Perl-driven sites are dwindling; many that are left (such as Slashdot) are holdovers from when Perl was the only way to do things. If these holdovers were launched today, they’d be in one of the other four languages described here, in all likelihood.

Perl, for all its strengths, was not designed as a Web scripting language. It was just a simple leap to make it such: Perl excels at text handling/transformation. HTML is an ASCII-text language. 1 + 1 = A dynamic solution. However, the lack of a Web-centric foundation makes Perl somewhat awkward to work with for Web development, especially for code monkeys (as opposed to trained developers).

For example, most scripting languages come built with constructs to handle the basic HTML GET and POST parameters (and so on). Until the Perl CGI module(s) came along, handling these variables page to page (such as a registration form) required a developer to create a custom subroutine to parse and make these variables available. Doable, but not clean or consistent (the biggest drawback).
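
For comparison, here is the sort of built-in handling I mean, as a minimal PHP sketch (the field names are just made up for illustration):

    <?php
    // PHP exposes GET and POST parameters automatically via superglobals --
    // no hand-rolled parsing subroutine required.
    $username = isset($_POST['username']) ? $_POST['username'] : '';
    $page     = isset($_GET['page']) ? (int) $_GET['page'] : 1;

    echo 'Hello, ' . htmlspecialchars($username) . " (page $page)";
    ?>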

And while Perl does have database connection tools (through the Perl DBI), Perl’s forte – as mentioned above – is text handling, and it is often used in conjunction with flat files (delimited TXT files) as a non-relational database. Perl rocks for such work.

Bottom Line Pros:

  • Ubiquitous. No matter what server you’re on (*NIX or Windows), Perl will be installed. So a user wanting a quick dynamic app – say a guest book – on their site can use/commission Perl and be confident that it’ll work, even if they move to another platform in the future.
  • There are tons of fully functional Perl scripts floating around that one can use/modify to get a site app up and running in a hurry
  • Probably the strongest language for handling any sort of text transformations (Python is supposed to be strong this way, as well. I’m just not familiar with it)
  • Very fast language
  • Incredibly powerful search and replace functions (RegEx etc)
  • Free – open source. And there is a strong open-source community behind Perl, creating new modules and so on
  • No matter the scripting language a developer knows, there is a better chance that this developer knows some Perl than any other second scripting language. It’s that ubiquitous

Bottom Line Cons:

  • Not designed as a Web scripting language, and it shows. This is a serious liability
  • While Perl is not too hard to pick up, it’s hard to master: A well written Perl script can look like a bunch of punctuation thrown up on a page.
  • RE: Preceding point – As a very non-English language, it can be hard to maintain, especially non-commented code
  • Hard language to really master well enough to do a whole site (well) in the language. This is not true of, say, ASP or PHP

PHP

PHP is currently my favorite language. It combines the Web-centric designs of ColdFusion and ASP with the robust text-handling ability of Perl to make a language that is not without its flaws, but one that is ideal for Web development.

PHP was designed from the get-go as a Web scripting language: PHP originally stood for Personal Home Page (today, PHP = PHP: Hypertext Preprocessor…yeah, just rolls off the tongue…).

As mentioned, it combines the best of many languages, including Java, into its framework. With the C-like syntax and expected higher-level functions (example: all the math functions), PHP can – out of the box – handle almost any Web task needed.
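
A trivial sketch of what I mean – the numbers and file name here are made up – using that C-like syntax plus one of the built-in math functions (ceil()) for a routine Web chore like pagination:

    <?php
    // A routine Web chore: how many pages does a result set need?
    $total_articles = 137;   // e.g. a COUNT(*) pulled from the database
    $per_page       = 10;
    $total_pages    = ceil($total_articles / $per_page);   // built-in math function

    for ($i = 1; $i <= $total_pages; $i++) {
        echo '<a href="articles.php?page=' . $i . '">' . $i . '</a> ';
    }
    ?>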

And with a little extra effort/expertise, PHP can be tweaked with non-standard options (such as ClibPDF) to handle virtually anything you can throw at it.

I thought that PHP would die out – be drowned out, if you will – in the tidal wave of (somewhat) proprietary scripting languages (ASP on Windows side, JSP on *NIX side). I was quite wrong. PHP seems to have grown in importance and visibility. It’s interesting.

One major downside of PHP is the way the language keeps changing: Moving forward is OK, but it seems like every time I look up a function (at the great online resource php.net), there is a note limiting its use: (PHP 4 >= 4.1.0).

The leap from v3.x to v4.x was huge, and to leave v3 users behind was a good move. But the mess is that there are a lot of v4 functions and so on that don’t work unless you have version 4.x.y, which is somewhat problematic. And it frightens me to see what the new version – 5 – will bring (currently a release candidate; so almost there).
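
One way I cope with the churn – a minimal sketch, with a made-up $item array – is to check that a function actually exists before leaning on it, rather than trusting that the host’s PHP is new enough:

    <?php
    $item = array('title' => 'Hello');

    // Guard against functions that only exist in newer PHP releases.
    if (function_exists('array_key_exists')) {   // added partway through the 4.x line, per php.net
        $has_title = array_key_exists('title', $item);
    } else {
        $has_title = isset($item['title']);      // fallback for older installs
    }
    ?>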

Bottom Line Pros:

  • It’s designed and built out as a scripting language. While some are extending its use as a shell-scripting language, it’s a Web scripting language first. This is an enormous plus
  • Steals – uh, leverages – the best of other languages (C, Java, Perl) so the syntax/structures are familiar to coders and just about every function/structure needed is available
  • Rapid coding is possible. Java/JSP may be more robust(?), but PHP is much faster to get up and running. Only ColdFusion is faster in this respect
  • Fast language, scales well, runs on *NIX and Windows
  • Open source; lots of contributions; the language continues to advance

Bottom Line Cons:

  • The updates don’t seem too well thought out – lots of updates/additions that require newest/newer version of PHP
  • Trying too hard to be all things. For example, the function disk_free_space() is the same as the function diskfreespace() (the latter is an alias). Will one be deprecated in future releases? Which one? Did I pick the wrong horse (alias)? Also a maintenance issue.
  • It is an open-source product, so there is no company behind it guaranteeing its future (such as MS behind ASP)
  • Reminiscent of Perl in many ways, but without the bare-bones structure of Perl (which is also a good thing, as Perl can be so damn punctuation heavy)
  • Pet peeve: The array variable looks the same as a regular variable. In Perl, an array called foo is identified as @foo; in PHP, it’s $foo (same as a non-array variable). Yes, there are functions to show the difference (see the tiny sketch after this list), but – from a readability/maintenance standpoint – the Perl notation is preferable, to me.
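
Here’s the tiny sketch I mean for that last point – the variable looks the same either way, and only a runtime check (or a naming convention) tells you which is which:

    <?php
    $foo = array('alpha', 'beta');   // an array...
    $bar = 'gamma';                  // ...and a scalar -- visually identical "$" variables

    if (is_array($foo)) {
        echo count($foo) . " elements\n";
    } else {
        echo $foo . "\n";
    }
    ?>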

Conclusions

The first thing to take from this (very rough) comparison is a given that I approached this exercise with: Languages are not right or wrong, they are just potentially convenient.

The best car? For what? Road rally or taking 12 kids to soccer practice?

Ditto for languages.

That said, some rules of thumb for choosing/using a scripting language:

Scripting Language Choice(s) – Rules of Thumb:

  • Yes, it’s hard, but do some research to determine what will work best for you now (basically most of the scripting languages listed above) and in the near future (harder; hints below)
  • Of the listed languages, only ColdFusion may expire: ASP has MS behind it; JSP has Sun and the Java community; Perl is too pervasive and useful to die (not grow? Possible: Doubtful, but it will still rock); PHP is OSS, and it has struck a nerve with developers.
  • ASP is great for MS-only shops with a lot of VB or C++ experience, so you can tie in COM objects
  • Java is THE server language for non-MS shops; JSP is the (sometimes logical) face to beans/servlets and so on. As long as Java is around, so will JSP be
  • Perl is not a Web scripting language (at foundation level). I will always (?) use Perl, primarily for scripting, log file transformation and so on, but I don’t want to do a full Perl site. But that’s me.

Economy of Scale (NOT!)

There are a lot of important advancements that take the status quo and massage it. For example, see the picture at right: A large (4G or so) hard drive that’s coin sized. Wow.

On the other hand, consider the same drive from an evolutionary standpoint.

It’s a Winchester Drive (I wish I knew the origin of this phrase; I don’t. Who/what is Winchester?). These have been used since the 1980s (at least); each year, the drives get smaller and the controllers get smarter.

When I first worked with such a storage device, it was for a library computer the size of a conventional desk, and the drives were Winchester but sorta (?) swappable. Each Monday, the two large platter drives (the size of deli platters) were rotated, and one taken off-site (early “back up”).

Today, smaller, denser, more efficient (pixie dust and all that)…but the same as yesterday.

Today, hard drives are evolutionary.

We need a storage medium that is revolutionary.

Flash RAM and so on. Moving parts do not make it in a solid state world.

I love what Toshiba is doing. But would it not be better without moving parts??

Jobs As Commodities

There are a lot of issues surrounding the meta-issue that falls under the rubric of Overseas Tech Industry Outsourcing, but one issue in particular strikes me.

Outsourced jobs – overseas or not – are analogous to the current battle (of sorts) over software.

On one hand, there are the Microsofts and Oracles that are trying – in some cases, desperately – to maintain the status quo: Software is a proprietary product; binary-only distribution and so on.

On the other hand, there are the RMSs and ESRs and Linuses (Lini?) who are pushing for more open ways to develop software, to break down the patent walls.

You can make a case for either side, but it is hard to disagree that the momentum is currently on the side of the open-source software folks. This is also the side that sees software as a commodity, much like Intel boxes (hardware) are today.

The entire issue of outsourced tech work is the same concept, to a degree. Basically, there is some company or group of individuals who architect things in all cases (hardware, software or software projects) and then others – without the need for higher skills or vision (though they may possess both) – put the things together.

Dell architects a PC, slaps it together from parts from Nvidia, Intel and so on.

Linus or a kernel maintainer creates the Linux version; others contribute this or that little piece or extension to same.

Company X has this software project that needs App A to talk to App B; the architecture/scope is done by a small group and then the 1 million lines of code needed are done by groups…who cares where?

Why should Dell have to make RAM chips?

Why shouldn’t Linux get a print driver for an obscure printer from someone who just wants to do it for kicks?

Why shouldn’t economy of scale work/basic economic reality drive coding? Just because the company is in India doesn’t mean those workers can’t do defined Java tasks as well as American workers.

Fortunately, I have still seen relatively little pushback from the tech talkers (i.e. bloggers and tech columnists) about the tech outsourcing.

I have, of course, heard a lot about this from the talking heads who are in political office or who want to get there. To a degree this is understandable – hey, jobs are hard to come by here, so let’s keep what we can – but it can’t ignore reality.

The tech industry can’t afford to fall for the empty rhetoric that is fueling the press releases of the MPAA and (especially) the RIAA. Times have changed, and even if it impacts the industry in a way that you (or you…or you…) don’t like, get over it. The genie is out of the bottle. Work with it or – ultimately – be left behind.

Tools of the Trade

I’ve spent the last eight or so years (has it really been that long?) doing Web development, and each year I get deeper and deeper into the actual hard-core guts of development. You know the drill, from simple presentation (HTML) through dynamic scripting languages/database access to the tools that actually either create parts of a site remotely (not on a POST or GET request) or tools to help manage/maintain the sites.

While most of my work has been with such types of Web development – and I see that staying that way – it’s sometimes daunting to see the tools that are needed to be even a baseline-competent Webmonkey.

  • HTML
  • CSS
  • JavaScript
  • DHTML (CSS + JavaScript)
  • At least one, preferably two scripting languages: ASP, ColdFusion, JSP, Perl/CGI, PHP
  • Today, RSS knowledge is important (which implies/requires some inkling of XML)
  • Some SQL (or forget dynamic sites, unless it’s tool based)
  • Rudimentary (at least) understanding of PhotoShop/graphics production

And this is ignoring all the tools (HomeSite, WebTrends) one may use and protocols that one needs to be at least unconsciously aware of (FTP, HTTP, HTTPS, telnet/SSH).

This is a broad range of skills: It’s a combo of writing (maybe not editorial copy, but error messages and so on), coding, graphics and integration chops.

That’s a lot for the basics.

A more hard-core programmer – say, a Unix C programmer – needs to know C, Unix and some socket stuff or what have you. Harder to learn, harder to get better at (IMHO), but – overall – a much narrower range of skills. A C programmer, for the most part, has little concern about graphics or graphic design; Web developers do.

I’m not complaining, mind you – having new stuff to learn is great. While I’ll continue to get better at even the very basic stuff (say, HTML) and still never run out of ways to improve there, it’s exciting to learn completely new technologies, such as RSS and XML.

Or SVG, CSS2, Python, Ruby, Mason…

Disruptive Technology – The Deep Web

The term Deep Web refers to the Web-accessible – but not currently Web-crawled – data out there. For the most part, this is databased information.

There’s a good – if light – article on Salon (free daily pass required) today about this issue, In search of the deep Web.

To say that this technology is disruptive is to put it mildly. A disruptive technology can be described (by me) as one that forces changes far out of proportion to how modest it appears, and whose effect is just about impossible to gauge before the shift happens.

A classic disruptive technology is file-sharing of music. We all know the Napster/GNUtella/RIAA stories.

An even simpler example is the hyperlink: OK, this takes you to another page. Cool. So what?

So what?

This link enables today’s search engines to do what they do – they keep following links and, in some cases (such as Google), use the number of links to a page to assign a page rank. A link – a public URL – allows anyone (and that’s key) to link directly to that page.

Your friends, your enemies, your competitors, search engines, the government…they all can link to your page.

Didn’t think that had so much power, did you? Well, maybe today you do, but did you back in the Netscape 1.1 days?

As is typical of a disruptive technology, the deep Web issue raises more questions than it answers/can answer. For example:

  • If deep Web diving becomes possible, what happens to the business models that have grown up around the proprietary data available only on Site X (examples: Orbitz, Amazon, etc)?
  • If the deep Web divers are the ones who can intelligently obtain and organize, well, everything, what is the role of the other sites? Think about it this way: If futureGoogle can scour the database(s) of futureEncyclopediaBritannica and futureEncyclopediaOthers, what is the need for the encyclopedia-centric sites? This is already happening to an alarming degree today and it’s just because of Google and a vast universe of Web sites; imagine the impact if that vast universe Google indexes contains the full Encyclopedia Britannica.
  • More importantly, if the deep Web divers are the ones who can intelligently obtain and organize, what happens when one company gets better at it than others? Does this company essentially own the Web (in the way Microsoft currently owns the desktop)? What are the ramifications of this?
  • As currently outlined, deep Web diving will require better crawlers, ones that can mimic human mouse clicks and so on so the source site will surrender its data. This raises a host of questions all by itself:
    • This will lead to a whole new class of ways to prevent such access, which may/may not impact the everyday user of, say, a stock market feed site. Right?
    • What if a company opens up their API (such as Amazon and Google have done, to a degree) so the deep Web problem is settled by the companies with the data? Will deep Web diving stop, or will it continue to get the companies that don’t open APIs? Will this deep diving then go to areas of, for example, Amazon that aren’t exposed by its APIs?
    • Isn’t this mimicking human interaction with the computer a form of a Turing Test? And – since this is not really done well anywhere today – why would anyone expect this to work for deep Web diving in the future?

  • Privacy/security issues. There are reasons some databases are protected – they contain patient HIV status, social security numbers, list CIA operatives and so on. How to differentiate between/protect these databases and leave others accessible to deep Web diving? And – if it’s possible to protect the privacy/security databases in a meaningful way (secure them, not laws to prosecute intruders after the fact), why wouldn’t any/most companies deploy the same technologies to protect what is near and dear to the company (KFC’s 11 secret ingredients…)?

For the answers to these and other questions, check back in about five years…

RSS Mess

I’ve been tweaking my RSS parser (yes, it’s home-grown) and it’s interesting – by rolling your own, you begin to appreciate what the real RSS aggregators etc. do.

RSS is a mess.

Why do some feeds put a given piece of data in one element, other feeds in another, and so on?

Right now, there is probably more exception handling in the parser/display code than there is actual engine.

While this is normal – coding is easy, error-trapping is hard/time-consuming – wasn’t the whole XML interchange (such as RSS) concept supposed to make all this easy?
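
To give a feel for the kind of exception handling I mean, here’s a minimal, hypothetical sketch – the helper name and the element names are just illustrative, since different feeds put the date and description in different places:

    <?php
    // A sample item, as if pulled out of a parsed feed.
    $item = array('pubDate' => 'Mon, 01 Mar 2004 12:00:00 GMT', 'description' => 'An example entry.');

    // Hypothetical helper: return the first element a given feed actually provides.
    function first_of($item, $keys) {
        foreach ($keys as $key) {
            if (isset($item[$key]) && $item[$key] !== '') {
                return $item[$key];
            }
        }
        return '';
    }

    // One feed uses one element name, another feed uses another...
    $date = first_of($item, array('pubDate', 'dc:date', 'date'));
    $desc = first_of($item, array('description', 'dc:description', 'summary'));
    ?>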

Ah well, that’s how we learn..

RSS Feeds

I’ve added a page to this blog – RSS Feeds – that is not so much for anyone else but for me.

I’ve been tinkering with RSS and XML for over a year; I built the RSS feed this site has by hand (a script parses the static index page and drops an RSS file every five minutes).

This is another step.

Basically, I built an RSS parser on my home box that grabs some feeds that I like at certain intervals, processes same, and uploads the results to the RSS Feeds page.

This is just an experiment to teach me how to do all this – no, I don’t want to be (and technically can’t be) the next Technorati or what have you.

Basically, I want to learn how to use RSS feeds, process same and get results so I can, at some future date, embed a “recent headlines” area in a client’s Web site.

It’s another tool that I can wield; another way to leverage what is out there.

This feed section is strongly beta; here are the good and bad points of the section:

The Good:

  • It works! For all the caveats and so on that are listed below, it pretty much does as designed. Slick.
  • All processing happens locally and is then pushed to the remote (publicly accessible) site. No database hits or what have you for the end user.
  • It was designed with extensibility in mind: It doesn’t process one hard-coded RSS feed, but all feeds in an array – so I can keep adding to/subtracting from the list with no code changes.
  • Using a simple JS function and CSS, I display the list of items without descriptions. A toggle is available to show/hide descriptions; defaults to no description (more headlines per inch). Note: Since JS is used, a page reload is not required. Very fast.
  • I cache feeds, so I don’t hit any site more frequently than every hour (the script just reprocesses local copies). During testing, I hit Slashdot too often, and I’m now under a 72-hour ban for RSS feeds. My bad. (A sketch of the cache check follows this list.)
  • Even on this first cut, the code is documented fairly well – it’s not alpha code – and has a handful of variables that can easily be transferred to a config file of sorts to alter the output. For example, I have a constraint on the number of listings from any given site (currently defaulted to 10). If the site offers more listings than the default, the default wins. If the site offers fewer listings than the default, the site’s count wins (duh!). But little things like that are good, especially this early in the process.
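
Here’s the sketch of that cache check mentioned above – a simplified, hypothetical version of the idea, not the actual code (the feed URLs and cache path are just placeholders):

    <?php
    // All feeds live in one array, so adding or dropping a feed means no code changes.
    $feeds = array(
        'thissite' => 'http://www.example.com/index.rss',
        'somesite' => 'http://www.example.org/headlines.rss',
    );

    foreach ($feeds as $name => $url) {
        $cache = "cache/$name.xml";

        // Only fetch if the cached copy is more than an hour old; otherwise just reprocess it.
        if (!file_exists($cache) || (time() - filemtime($cache)) > 3600) {
            $xml = implode('', file($url));   // fetch the remote feed
            $fp  = fopen($cache, 'w');
            fwrite($fp, $xml);
            fclose($fp);
        }
        // ...parse $cache and build the output page here...
    }
    ?>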

The Bad:

  • The processing code – all combined – is too much code. This calls this which includes that which writes to this which is FTP’d to there…and so on. First cut; the code works. Now the challenge is to optimize.
  • Right now, it’s built in PHP, with a shell script for the cron. Should build the entire thing in either Perl or a shell script to make it faster.
  • Major character-encoding issues – lots to learn there, but that’s part of the point: to get it as generic as possible so I can roll it out for any RSS feed and have it work.
  • I’d like to add a feature where each feed can have its own schedule – for example, I don’t care if I hit my own site more frequently than every hour. But right now, the global is one hour (I can set the global to any time), and I can’t override that value for any given feed – it’s all or none. In the future, this will be important: Some sites will allow more frequent updates, and that should be designed into this sort of app (see the sketch after this list). Why not? Worst-case scenario, I build in this functionality that almost never – or just never – is used. It’s there if needed.
  • As my feed list gets larger, I’ll probably have to create some sort of page-level navigation (drop-down form or bullet list) to take users down the page to the feed desired.
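
For that per-feed schedule, something like this is what I have in mind – a hypothetical sketch of a config array with an optional per-feed override of the global interval (URLs and numbers are placeholders):

    <?php
    $default_interval = 3600;   // the current global: one hour

    // Hypothetical config: each feed may carry its own refresh interval (in seconds).
    $feeds = array(
        'thissite' => array('url' => 'http://www.example.com/index.rss', 'interval' => 300),
        'somesite' => array('url' => 'http://www.example.org/headlines.rss'),   // no override
    );

    foreach ($feeds as $name => $feed) {
        $interval = isset($feed['interval']) ? $feed['interval'] : $default_interval;
        // ...fetch only if the cached copy is older than $interval seconds...
    }
    ?>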

But first cut. Damn good for that, I think.

Lot of tweaks needed, but this is at the 80/20 mark already (80% of the functionality with 20% of the work…)

My, How You’ve Grown!

Three stories/events that have created a fair amount of buzz around the Blogosphere lately:

OSS Software is Too Hard for Non-Geeks to Use

Written by Eric S. Raymond, this rant was inspired by Raymond’s own efforts to print from a (Linux) computer to a printer connected to another Linux box on his own network.

Hilarity (not) ensued.

His basic message was that OSS software basically sucks when it comes to helping newbies (or, as in this case, experienced Unix Jocks). If people expect Linux to work on the desktop, Wizards and so on have to work as an average user would expect.

Bill Gates Stumps for More CS Majors

The richest man in the world made a tour of major CS colleges recently, trying to drum up interest in CS. Enrollments in CS have dipped over the last few years.

Dave Winer blames the MS juggernaut for killing interest by killing competition; MS’s Scoble, of course, disagrees.

Both have interesting things to say about this issue.

Search Wars Intensify

Dan Gillmor takes Yahoo to the woodshed for hiding paid inclusion in newly released search; Jeremy Zawodny rebuts. Update: Tim Bray – someone who knows search – chimes in. Hint: He’s skeptical of Yahoo’s direction.

To me, one single thread runs through all these issues: Information Technology – which, more and more each day, is rapidly becoming inseparable from Internet Technology – is growing up.

These are all valid issues to bring up; these are all growing pains.

We’re going to see a lot more of these types of issues in the near future; some are going to be bloody battles. And not all will end well.

In a follow-up to his OSS Rant, Raymond published some letters he had received from users re: this subject.

One of the most interesting comments to this follow-up was posted by an anonymous reader:

Linux Identity Crisis

I think the whole community needs to step back for a while and determine just what exactly Linux wants to be.

This whole premise of easy-to-use yet powerful software is flawed. A powerful tool necessarily involves some training for the user…Open source has always been about power and flexibility…If you want to serve the ends of power and flexibility, you cannot also serve the end of ignorant users. No other industry in the business of making powerful tools will dispute this fact…The real problem here is that Linux no longer knows what it wants to be. It wants to conquer the world somehow – to serve best the needs of both grandmothers on AOL and researchers at physics labs.

This user basically argues for keeping Linux complex so no power is sacrificed. Agree or disagree with what he writes, it’s a compelling question.

Where does Linux want to go today?

Bloggers Beware

The message in Steve Outing’s most recent column is that journalists who blog on their own time make their editors nervous.

There is a certain wisdom behind this nervousness – a paper is supposed to be impartial; a personal blog may well contain very subjective opinions that could (at least) appear to undermine that impartiality.

But isn’t it true – to varying degrees – that personal opinions of any employee of any company can help shape an outsider’s view of that given company? While the so-called damaging perception shift may be different for different careers – journalists are supposed to be impartial; Microsoft employees are supposed to be pro-technology – isn’t this a chance anyone with a personal blog or Web site takes?

While I understand the desire of editors to control their writers – even personal writing – I don’t agree with it. And I really don’t see how such a policy is either fair or realistic. The New York Times, for example, appears very anti-personal blog, according to Outing.

NYTimes.com Editor-in-Chief Len Apcar puts it bluntly: “I don’t like the concept of the personal blog in terms of The New York Times.”

– Reported by Steve Outing

Again, I see the editor’s point, but this seems a little unrealistic. You’re on your own time and – this is the part the editors don’t seem to grasp – if you blog stupidly, well, the blogger is at fault. Why did you hire this chucklehead in the first place? So it’s an issue of control, I guess.

The advice of a USAToday.com editor seems a little more realistic: “assume that you are always speaking publicly.” In other words, your blog’s on the Internet, it’s public, even if you don’t tell anyone about it. It’s not a diary that you lock in your drawer every night. So be aware of what you say and how this may affect you and your company.

While there understandably is a little resentment over an employer’s hold over what an employee does on the employee’s own time, the concern over personal blogs is really no different than concern over any other public action. Actions may have repercussions – if a blogger attempts to push the envelope, I fully support that; if the employer terminates that employee because of it, I can’t really fault the employer (obviously, this is on a case-by-case basis).

But I think it’s interesting that newspapers – a word-oriented world – are so frightened of personal blogs. It’s almost analogous to a totalitarian regime’s fear of the press.

On the other hand, newspaper editors – better than most people – know the power of public words. And that damage control over loose words – however well done and however faultless the associated parties – is just that: control of damage.

The control aspect of newspapers’ fear of blogs is just a way to contain potential damage before anything happens. I don’t fully agree, but I fully understand that aspect of their concern.

The other aspect is more troubling: Newspapers don’t seem to fully understand blogs and their potential strengths and weaknesses. That’s the most peculiar part.