SCO – Attack or Not

OK, SCO was allegedly felled by a DDoS attack early on Wed., Dec. 10.

Now, there were (and are) a lot of postings on /. and Groklaw saying this was probably just smoke and mirrors on SCO’s part.

Well, a CAIDA report seems to confirm that SCO did experience an attack, and Netcraft graphs appear to support this, as well. With a few days of reporting under its belt, the Netcraft graphs show the site going down on three of the four days at what appears to be precisely the same time – which would indicate an attack that is scheduled to go off at a certain time on compromised machines.

OK, I began writing about this alleged attack a few days ago and then just decided to let it drop – basically, the message was going to be divided into two sections: the attack is real; the attack is a fraud.

Each of the two sections would have questions and comments associated with it.

Since the attack appears legit, one half of my doc is unnecessary.

But it still leaves the following questions and comments:

  • The FTP server stayed up the entire time – and it is (by IP) on the same subnet (.12 is Web server, .13 is FTP). If the attack did consume the bandwidth, why was the FTP site fully accessible? Note: As I type this, SCO’s Web site is again down; the FTP site is still operational and zippy. And the IPs have not changed, so rule that out. So – again – this is not a bandwidth issue. Has SCO just pulled the Web box off the Internet?
  • SCO spokesman Blake Stowell says the attack knocked out their intranet, as well. Why? Sure, users inside SCO wouldn’t be able to get out if all the bandwidth was consumed, but the intranet should be separated from the Internet by a DMZ, so the intranet should still work. Unless they are real amateurs, which – as an OS company – they shouldn’t be.
  • This has happened before – why didn’t they harden their servers? (The whole SYN-cookies issue has been widely discussed). Again, this is an OS company, not a pet shop’s Web site.
  • According to SCO, they got hit with a SYN attack. While they may (or may not) have had their bandwidth sucked dry, a SYN attack is an old and basically uninteresting attack, easily defended against (see the sketch after this list). Why didn’t they defend against it?
  • When the site came back, it appeared to be different. Which led many to say the attack was actually an update gone bad. But if the attack is real, why the different content and HTML type (one Groklaw poster reports new XHTML code)? Either they didn’t have good backups (again, they look like amateurs), or they decided to take the opportunity of an off-line server to roll out changes. The latter doesn’t make a lot of sense to me – why roll out new code before you finish the first job (getting the current site back)? Again, this is amateur hour…or something else is going on. Just doesn’t look kosher.
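
On the SYN-cookie point: I obviously don’t know what SCO runs in front of its Web server, so purely as an illustration, here is a minimal sketch of how trivial the defense is on a Linux box (standard Linux sysctl names; whether anything comparable applies to SCO’s own stack is anyone’s guess):

    # Illustrative only – assumes a Linux kernel built with SYN-cookie support.
    # Turn on SYN cookies immediately:
    sysctl -w net.ipv4.tcp_syncookies=1
    # And make it stick across reboots by adding this line to /etc/sysctl.conf:
    #   net.ipv4.tcp_syncookies = 1

Point being: if an OS vendor can’t manage the moral equivalent of the above on whatever it runs, that says something.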

Don’t get me wrong – if SCO was attacked, I don’t condone it. Virtually everyone feels that attacks are stupid, counter-productive and just plain wrong, regardless of the target.

But SCO didn’t come out of this looking any better to the tech community; they looked like newbies who had been given root.

Arrogance Demo

“If a company wants code, it’s the other party’s decision to provide that any way they feel like providing that.”

— SCO spokesman Blake Stowell, replying to IBM’s motion to compel discovery. SCO had supplied printed pages of code to IBM instead of electronic versions.

SCO has asked the court to force IBM to turn over all 40 million lines of AIX – while the judge has not ruled on this yet, let’s pretend the request is granted.

Is it OK for IBM to fax over the code – in 8pt type – so it’s virtually impossible to even OCR? Hey, it might be the way IBM feels like providing it…

This lawsuit has been a joke from Day One, but it’s hurting Linux and OSS.

Statements like this one by SCO execs show the company’s true colors: They don’t care what happens to Linux and so on. It’s just “show me the money!”.

It also demonstrates that SCO has no cards up its sleeve: Unfortunately, the wheels of justice grind slowly. The court date is currently set for April 2005.

That is not a typo. Over a year from now…

Potentially, another year+ of FUD and loathing from Lindon, Utah.

Virtual Goodness

Apache.

Virtual hosts.

Need I say more? How slick is that?

I’ve been running Netscape (nee iPlanet…or is it SunOne now??) Server on my local Windows box for years, simply because I was working at a place that ran Netscape (on Solaris, however). Seemed to make sense to keep the same tools at work and home – I’m a big supporter of using the same tools.

Since I left that place, I still run Netscape just due to inertia. It works, does everything I need. No need to mess with anything else.

On my Linux boxes, of course, I run Apache (1.3.23, not the 2.x line).

Every time I set up a virtual host on those boxes – and I only recently learned how to do this, simply because I never had a need for it – I’m amazed. So simple. So flexible. So sensible.

And for all the IIS snobs out there (why would you be?), no, there are no icons to click through to set up a virtual host. Just plain English words.
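
If you’re curious what those plain English words amount to, here’s a minimal sketch of a pair of name-based virtual hosts in Apache 1.3 (the IP, hostnames and paths are made up; the real directives go in your httpd.conf):

    # Hypothetical httpd.conf excerpt – adjust names and paths to taste
    NameVirtualHost 192.168.1.10

    <VirtualHost 192.168.1.10>
        ServerName   www.example.com
        DocumentRoot /home/httpd/example
    </VirtualHost>

    <VirtualHost 192.168.1.10>
        ServerName   blog.example.com
        DocumentRoot /home/httpd/blog
    </VirtualHost>

Two hosts, one IP, one daemon. That’s the whole trick.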

Me likes.

Two Portents of the Future Web

In a mix of both good and bad news, the sites jennicam.org and Blogshares have closed down (the latter) or will shortly (the former, on 12/31/2003). Each was a pioneering site in its own way.

The good news is that these two folks – basically, in each case, one person – brought something fairly new and fresh to the Net (or took a concept to the next level). It was embraced, widely copied, and ultimately brought down by its own success. That last part, especially, speaks to how deeply the Net has penetrated.

The bad news is similar, ironically: The high penetration of the Web is bringing down sites (the Slashdot effect) and driving transfer volumes beyond what the average dude with a concept can realistically sustain. Back in 1996 when the Jennicam went up, it was a cool idea (I’ve no idea if hers was the first – I doubt it) and it brought traffic. But not enough to crash her server or what have you.

So the real downside here is that it is getting harder for an individual with an idea – be it a guy in London with a blog or a woman with a webcam – to make something successful that will last.

As soon as it gets popular enough to, say, get enough traffic to pay the bills (either via PayPal contributions or ads), the traffic gets high enough to cost more and require more site maintenance.

So, are we seeing the end of the popular independent sites? I don’t know, but I hope not…

Two Approaches to Software Dominance

There’s been a lot of chatter on blogs recently about Microsoft’s next OS, Longhorn. I’ll spare you a list of pertinent links because that’s not what I’m really here to talk about.

I’m here to talk about two approaches to software dominance. Specifically, MS and Longhorn vs. Sun and Java.

I didn’t really make this connection – a tenuous one, at best – until last night, but it’s there.

And this is not a conspiracy-theory entry; it’s just some ruminations.

Most readers should know enough about the Sun/Java saga to get the gist of what I say below. As for Longhorn, this brief comment will suffice (and it’s both a vast under- and overstatement): With Longhorn, MS is going to offer a richer user experience, one that will extend to a richer Web experience. However, this richer Web experience will only be available to users running Longhorn.

So, bottom line:

  • Sun tried to make something that would run anywhere (the “write once, run anywhere” mantra).
  • MS is purporting to offer a richer experience, but only if you run Longhorn.

Diametric opposites. One embraces existing platforms and standards and extends to run anywhere; the other ignores conventions and standards and builds something “better” on a single foundation.

This is going to get interesting.

As much as I’d love to see a world that’s all Longhorn, let’s be realistic: that won’t happen. So, there’s no way that I’m going to say to a Web developer “give up HTML and go with XAML.”

But, you will see some business build two sites: one in HTML and one in XAML. Why? Because they’ll be able to offer their customers experiences that are impossible to deliver in HTML. Imagine if Amazon could sell 10% more stuff to a Longhorn customer than an HTML one.

— Robert Scoble, The Scobleizer Weblog

And – while I understand MS’s intentions(?) and all that – I still shudder when I read that quote: We Web designers/developers only recently emerged from the build-multiple-versions-of-every-site morass, thanks to the now widely (to a degree, agreed) adopted Web standards (CSS and such).

I don’t want to build two versions of every site again….

Icon Obsolescence?

I’m a big fan of using keystrokes – as opposed to mouse clicks – even in programs that are set up for GUI use. Just faster and more intelligent.

It makes me nuts when I see someone, for example, typing in a text program (Word, HomeSite, whatever) and then taking their hand off the keyboard, reaching for the mouse, and running the cursor up to the application menu and clicking the “Save” icon.

What’s the matter with Ctrl-S? (Or Command-S on Macs?)

I actually caught myself doing this the other day (bad!), and it made me think about that Save icon.

Does this icon – which appears (in some form) in most applications, regardless of OS – even make sense today? It’s a floppy disk. For users of iMacs, for example, this will be really weird: There is no floppy disk. And Dell is starting to drop them as a standard feature on their desktops; most/many notebooks have dropped them to some degree. For example, my three-year-old ThinkPad has a floppy drive, but it’s an external drive (and I virtually never use it).

My guess is that we are stuck with this icon for some time, as it means Save – most users get that.

But it’s kind of weird….in not too many years, newbies will be able to find this icon to save things, but they probably won’t know why the icon looks like that.

It will be another one of those oddball icons that you recognize only because – explicitly or by stumbling onto it – you learned what it means.

Redesign In Progress

No, no Web site is ever finished in any true sense. Stabilized, perhaps, but always undergoing tweaking and enhancements, visible or behind the scenes.

This blog is a Web site.

Ergo, it ain’t done.

Actually, I’m in the process of a major redesign of the site, which I hope to launch in the next week or so.

Why change?

  • With this redesign, I’m divorcing my blog from my Littleghost.com site. While my blog is hosted there, it has little to do with the rest of the site. It stands alone content-wise; it should stand alone, period.
  • I wanted to experiment with the use of themes/alternative style sheets. The blog redesign was an excellent excuse to do so. (Pretty cool, actually).
  • I keep thinking that – at some point – I will jettison Blogger and either move to Movable Type or (very likely) roll my own solution. The first two bullet points are additional steps in this direction.

Bulletins as events warrant…

The Charlie Brown Connection…

I watched Robert Altman’s Gosford Park this past weekend.

As Queen Victoria would have said, I was not amused.

There were some good points and some bad points to the movie, but by far the worst was the dialog. Altman is famous for his layered dialog – where everyone, just like in real (not reel) life, talks at once.

That’s hard to follow sometimes, but it has an effect that is compelling at times.

However, with Gosford Park, it made things almost unbearable because of the content of the dialog. Example follows:

Lady No. 1: “Muffle whicca stuuf posip? Today?”

Lady No. 2: “Oh, no. Jespea mummer slamca.”

Servant Girl: “Yes, my Lady. Shall I wruther posip?”

Who the hell was the dialog coach? The same one used for the adult characters in the Charlie Brown cartoons?

I didn’t know what was going on half the time just because of this – and there are a lot of characters to keep track of: Tough to figure out the motives of this or that person if the person’s name appears to be “Shoefulllop” or “Droplegrmmgh.”

Ah well, what do I know? Nominated for movie of the year (2002). So much for my career as movie critic…

Why SVG is Doomed

I write this more as a prognostication/plea than as a fact.

I hope SVG is not doomed.

But I think it is. (Here’s a nice historical overview of SVG [note: PDF] ).

Here’s why:

  • SVG has been around since 1998 as a working draft (to get around the limitations of PostScript for Web use). That’s over five years. Half the Web’s life.
  • 99% of average users – including I would guess ~75+% of Web developers/designers – have never heard of SVG (ouch!)
  • 75+% of average users – including I would guess ~99% of Web developers/designers – have heard of Flash (another vector-graphic tool)
  • Flash comes with most browsers; installation for those without is straightforward
  • Few users – a statistically insignificant percentage – have the SVG plug-in
  • You can buy (Macromedia) Flash-building tools. Robust, well-accepted and integrated tools
  • SVG requires hand-coding in many cases; few programs output native SVG code (Adobe Illustrator being the sole major exception). See the snippet after this list for a taste of what that hand-coding looks like.
  • SVG’s acceptance and market penetration is following the same trajectory as VRML – yes, the forgotten VRML…
  • MS’s Longhorn (as currently planned) is going to come with its own proprietary vector-graphic tools. Why invest in making/building tools/app for SVG when MS is poised to crush them/it?
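
For the record, hand-coding SVG isn’t rocket science. Here is a minimal, made-up example of the markup (a circle plus a label), just to show what you would be typing by hand:

    <?xml version="1.0"?>
    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
      <!-- a circle and a caption, typed by hand -->
      <circle cx="50" cy="50" r="40" fill="red" stroke="black"/>
      <text x="100" y="55" font-size="14">Hello, SVG</text>
    </svg>

The problem isn’t the syntax; it’s that almost nobody has a viewer for it, and almost no tools will write it for you.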

Hey, I’m not happy about this, but let’s be real about it. It does not look good.

Where DO You Want to Go Today/Tomorrow?

I read an interesting article the other day/week, and for the life of me I can’t find the source, but that’s not really that important: The article was musing about software bloat and all that.

Basically, the author said he/she installed a years-old version of VisiCalc on a new machine and…guess what? Pretty much the same as today’s MS Excel.

I’ve been thinking about that, especially in relation to the revolution some are trying to put into place on the desktop: Apple’s OS X, Linux on the Desktop, MS’s Longhorn strategy.

How much power and how many features do we need?

For what uses do we need them?

Caveat: I’m not a gamer, so all the comments will be for non-gaming uses. I know gaming has a different set of requirements than normal/power use, so … you have been warned.

Basically, I’ve always maintained that processor speed is not that big a deal – yes, faster is better, but when I help people buy computers, I steer them toward a processor a notch below the latest and greatest and put the savings toward RAM upgrades.

That’s where you’re going to see some performance bumps.

For example, the computer I type this on – a Windoze box – is almost three years old. It has a now decidedly mid- to low-end processor: 1GHz. At the time, I think it was the fastest I could get (when I buy computers for myself, I splurge…).

However, I did put a lot of RAM into it – 512M. While that’s not a bunch today for power users (1G would be the norm), it’s still – three years later – more than the average user gets (that’s around 256M, I think).

I’m not an average user, but the 512 is doing well for me.

Moral: Get RAM, not GHz.

As far as applications go, there is room for a lot of users – not most, but a lot – to make use of the so-called bloat features. Most users are pretty much content with the basic Office products (usually just Word and Excel) without macros, maybe with an occasional embedded picture (Word) or a macro or chart (Excel).

Business users, of course, run both programs a little harder…but with them we’ve covered about 95% of users out there.

At least 95%.

That leaves numb-nuts like myself and other blogger/geeks. The extra horsepower is appreciated. Hell, I’m running a half-dozen servers (Web, database, FTP…) on this one box, and all is fine.

If – magically – the processor started clocking at 3GHz tomorrow, I’d certainly notice the difference, but I don’t really see many hangs here.

Moral: The power that’s been around for several years is good enough for at least 95% of today’s users.

OK, but I do see occasional hangs on my machines, and this is where the extra features/new apps and horsepower on new machines can come in handy: For the most part, the hangs are due to graphics.

For example, I’ve given my machine’s specs (1GHz, 512MB RAM), and I’ll note that it runs Win2000 – part of the WinNT lineage.

I have another machine in the house currently running Windows ME, and it’s only a 500MHz, 128MB RAM machine.

Yet Adobe’s Photoshop (v5.5 – now an older version) runs way faster – and opens wayyyy quicker – on the lower-end machine. (Both are Dell Dimension/Intel boxes.)

NT is just not optimized for graphics; it does better with data – running SQL Server or Apache/PHP or what have you. Not graphics.

Windows XP, from the little I’ve worked with it, is as sluggish with graphics as the other NTs I’ve run, as expected. The Win9x line appears to be optimized better for graphics, at least it seems that way to me.

Apple is the same way – it handles graphical data better than thread-based data. Again, to me. And I really can’t speak for DBs and so forth on OS X. I just don’t have the experience.

Here is where the new boxes are a big help – yesterday’s boxes just can’t handle some of the movie, music/sound, and graphics apps that are out today. Or they could…but you would not want to be the one using them on such a machine.

Run a million-year-old version of VisiCalc on an old box? No problemo.

Run a movie-editing program on an old box? No thanks….

Moral: Some new applications require new hardware; suck it up and take the padlock off your wallet.

On the other hand, there are applications/OSes that require new hardware but really shouldn’t. Take the VisiCalc vs. Excel comparison. Excel does do much more than VisiCalc, but – beyond graphs and macros – a lot of it is either barely used or just so much fluff.

I guess the best example of fluff is using Word as a desktop-publishing tool. While some people love this (for whatever reason; mainly because they’ve never heard of QuarkXPress or simply because they have Word and not Quark), this is extending the tool a bit too far, for my taste.

When I need to do layout (print), I use Quark or PDF, not Word.

On the other hand (there’s always another hand), Word as desktop-publishing tool is probably the easiest way to share such data for both businesses and personal use. Everyone has Word. Power in standards.

Moral: Like it or not, bloat will continue, and sometimes there is a good reason for it. However, the average user doesn’t understand the ramifications of being able to, for example, embed a picture in Word (bloat; need for new hardware). App makers do ($$$$). And Intel/Dell (Delltel?) don’t mind, either…

So what’s the bottom line here??

I guess the bottom line – the Uber-moral – is threefold:

  • There is really little need for the bloat in current apps
  • People will embrace the bloat, anyway, requiring more robust hardware
  • Some apps require more robust hardware; get over it

That’s damn depressing…