The New Net | The Desktop as the Net?

Sure, change is inevitable – and, in most cases, a good thing.

Figuring out exactly what kind of change Microsoft’s Longhorn represents is a bit perplexing. (Read some of Tim Bray’s observations, or read a Microsoft Avalon article.)

This is more than just an OS upgrade – it’s a change on par with the move from the DOS CLI to the GUI that the original Macintosh introduced (to the masses – don’t flame me on the whole PARC history; I know…).

I don’t quite understand it all – I’ve read too little about it to venture solid opinions – but it appears to bring the scripting capabilities of HTML to higher-level languages. However, the scripts (in the case of Avalon, XAML – an XML-based language, I guess) are merely wrappers for distinct classes in the API. So, much like an H3 tag in HTML represents – if you will – an API call to the browser’s rendering engine (really a parsing operation, but bear with me), the XAML is an API call to the actual OS.

Youch! That’s powerful.

And it allows the representation of objects/text to be the same in applications and in the browser (hey, they’re calling the same API).

I’m going to have to look more closely at all this, and see just what the heck it means for those not on Longhorn. Then what happens?

This is bigger than I ever thought.

< A few minutes later >

I just finished the Avalon article, which included this conclusion:

Avalon and XAML represent a departure from Windows-based application programming of the past. In many ways, designing your application’s UI will be easier than it used to be and deploying it will be a snap. With a lightweight XAML markup for UI definition, Longhorn-based applications are the obvious next step in the convergence of the Web and desktop programming models, combining the best of both approaches.

— Charles Petzold, Create Real Apps Using New Code and Markup Model

Hey, I was on the money about the Net/desktop (apps) convergence concept. Scary….

The End of the Gallery

Again, end of the gallery, not the end of the galaxy.

No apocalypse now.

By end of the gallery, I mean I’ve finished up the backend of the gallery tool.

As outlined in my last entry, I decided to build a PHP/MySQL backend (the front end uses Perl and flat files). While it was a relatively straightforward process, it was more work than I anticipated – isn’t it always?

The [intentional] use of MySQL was a bit of a hindrance, but I wanted to use MySQL because it’s the predominant OSS DB out there, and I need more practice with it. And this project is pretty much a good fit for MySQL: nothing too involved, just some selects and inserts, all running locally, so it’s a no-brainer (yes, perfect for me).

Here’s how the backend project ended up:

  • Add/edit gallery page (all at once)
  • Edit image name/desc (all at once)
  • Add new image/reload existing image (processes and moves file to local and remote server)
  • Gallery-to-Image mapping (gallery at a time)
  • Include file for header (menu/DB connectivity etc)
  • Processing page to generate all necessary TXT files for front end

I used the same CSS sheet as the front end (with some back-end class additions tacked on), so the UI is the same and that’s one less file to maintain (good…).

As far as the database goes, it’s pretty much a trivial exercise – see the code below:


/* list of galleries */
create table gallery (
  gallery_id   int primary key auto_increment,
  gallery_file varchar(255),
  gallery_name varchar(255),
  gallery_desc text,
  date_added   datetime
);

/* images with captions */
create table image (
  image_id   int primary key auto_increment,
  image_file varchar(255),
  image_name varchar(255),
  image_desc text,
  date_added datetime
);

/* mapping table, images to galleries */
create table mapping (
  image_id   int null,
  gallery_id int null
);

As you can see, three tables, the last of which is just a mapping table between the first two, so any picture can belong to any number of galleries.
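
Pulling a gallery’s contents is just a join against that mapping table. Here’s a minimal sketch of the kind of select the backend runs – the connection details, database name and gallery ID are placeholders, not my actual setup:

<?php
// Minimal sketch: pull one gallery's images via the mapping table.
// Connection details and the database name are placeholders.
$db = mysql_connect("localhost", "username", "password") or die(mysql_error());
mysql_select_db("gallery_db", $db);

$gallery_id = 1; // whichever gallery we're working on

$sql = "SELECT i.image_file, i.image_name, i.image_desc
        FROM image i, mapping m
        WHERE m.gallery_id = $gallery_id
          AND m.image_id = i.image_id
        ORDER BY i.date_added";

$result = mysql_query($sql, $db) or die(mysql_error());
while ($row = mysql_fetch_assoc($result)) {
    // in the real tool, lines like these get written to the gallery's .txt file for the front end
    echo $row["image_file"] . " - " . $row["image_name"] . "\n";
}
?>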

Lots of busy work, but – for the most part – nothing earthshaking.

One of the nice aspects of this project was getting more experience with PHP and files – I’ve done it before, many times, but always separated by large chunks of time. A refresher is always nice.

Actually, it was a nice refresher in PHP, in general. I’ve been working more with Perl and ColdFusion recently, and I keep forgetting about how much I like PHP. And the more of it I learn, the more there is to like.

One new aspect of PHP – for me – was the FTP tools. I’d just never had the occasion to need them in PHP.

When I mentally architected this tool and decided on PHP, I didn’t actually know whether PHP supported FTP – I figured it must, and that it probably wasn’t a hack, but I hadn’t checked. I just assumed it did, and – if not – I’d run exec() from PHP to a shell or Perl script to do the FTP business.

Thankfully, PHP’s FTP tools are as I expected: Pretty extensive and pretty damn accessible.

The two complaints I have with PHP’s FTP functions are the following:

  • The argument order is always target first, then source – for PUT that’s remote, local; for GET it’s local, remote. I am used to the Unix-style source [space] target. I was hosed on this for about a half hour, until I actually RTFM’d. A little weird to me, but it’s consistent across the PHP FTP functions, and consistency is good.
  • I’m probably missing something, but I don’t see support for MGET or MPUT – each GET or PUT is discrete, as far as I can tell (and, here, I have RTFM’d). Not a problem in this case, since I’m looping through galleries, creating them and uploading them – it’s a one-at-a-time thing anyway. But what if I wanted to upload all the JPEGs in a directory? I can’t do an “mput *.jpg .” type thing, as one can with most CLIs. You have to grab the list and loop (roughly what the sketch below does). OK, but it still would be nice… maybe in v5
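
For now, the workaround is exactly that: grab the list and loop. Here’s a rough sketch of an MPUT-type helper built on the existing functions – the server, login and paths are placeholders:

<?php
// Rough MPUT-style sketch: loop the local JPEGs and ftp_put() each one.
// Server, login and directory paths are placeholders.
function ftp_mput_jpegs($host, $user, $pass, $local_dir, $remote_dir)
{
    $conn = ftp_connect($host) or die("Could not connect to $host");
    ftp_login($conn, $user, $pass);

    foreach (glob("$local_dir/*.jpg") as $local_file) {
        // note the argument order: remote (target) first, then local (source)
        if (!ftp_put($conn, "$remote_dir/" . basename($local_file), $local_file, FTP_BINARY)) {
            echo "Upload failed: $local_file\n";
        }
    }

    ftp_close($conn);
}

ftp_mput_jpegs("ftp.example.com", "username", "password", "/home/me/gallery/full", "/htdocs/gallery/full");
?>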

Overall, the Gallery Project was a blast, and it’s turned out well.

I need to do some tweaking – for example, build an FTP function for my MPUT-type needs – but it’s pretty solid and the damn thing actually works!

Time to scan in more pics….

Birth of the Gallery

No, no, no – put away the pointy-ear caps, you Trekkies: Birth of the Gallery, not Galaxy.

As I’ve mentioned, I’ve been incorporating some pictures of mine into this blog, including a random “Pic ‘O the Day” (see left-hand column).

OK, so I had all these pics scanned in and uploaded, but … only one a day would appear.

Which was nice, but why not make a gallery of the pictures?

Better yet, how about multiple galleries – the pictures grouped by subject matter or what have you?

Yeah, why not?

So I worked on a method to get this working, and I have half of it done: the user-presentation layer.

Enter the Gallery, and feel free to browse around.

OK, as mentioned, I have only half the project done: the part that’s posted. Since I’m on Blogger and run off their database (for text only, not other stuff), I have limitations.

And my host does not allow databases on my plan (didn’t allow them at all until just recently), and the scripting languages supported are thin: Basically, this is a job for Perl and flat files.

It all came together fairly easily; I’m surprised that it worked well. I built it remotely and uploaded it and it worked flawlessly the first time. Wow. That’s cool.

  • One file is the list of all images – image name, title and description (since all images live in one directory, file names are unique). Call it the caption list.
  • Another file is the list of galleries – the name of the .txt file that lists each gallery’s contents, the gallery name and the gallery description. Currently it’s only four lines (four galleries).
  • One .txt file for every gallery: just a list of the images in that gallery (the caption file holds the details, with the image name acting as the flat-file equivalent of a primary/foreign key).

In this way, I can build galleries from whatever images exist; images can exist in more than one gallery – but the title and description always reside in the caption file, so maintenance is trivial.

Ah, maintenance. That’s the second part.

How do I maintain that – on my personal machine – and then push it to the Web site daily (or at whatever interval I pick)?

While flat files work great with Perl on my host, maintaining flat files doesn’t make a lot of sense. This really calls for a database app that pushes the data to flat files for publication.

Otherwise, it will be quite difficult to control.

So I’m thinking of building it as a PHP/MySQL application on my local machine: build tools to add/alter the galleries, and then have a tool push the changes to my host.

Hmm…will be interesting.

Until then, enjoy what I have. I enjoyed building it, twisted fool that I am…

Geek Love

I confess – we geeks are a strange breed. (Actually, it’s surprising that we are allowed to breed…)

I had an algorithm for testing an e-mail address in Perl, but I just didn’t like it. Wasn’t robust enough for me.

I figured – and I’m sure I’m correct – that this has been done a million times by a million people, and it would be there for the taking somewhere on the Web.

Well, I found a couple of regexes that were close, but – again – not quite what I was looking for.

So I rolled my own (again…), and I think it’s what I want.

If the e-mail address doesn’t match this mask, it’s an invalid address:

/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])*@([a-zA-Z0-9_-])\.([a-zA-Z0-9_-]{2,4})/

Update 11/11/03: Improved below...
/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+\.([a-zA-Z]{2,4})$/

Notably, what this does that my other one didn’t is the following:

  • Allows periods (dots), hyphens and underscores in the first part of the address (before the @), but does not allow these special characters to be the first character.
  • Allows only one @ character (a flaw in my last regex)
  • Requires 2-4 character top-level domains (.ca, .net, .info). I haven’t checked this out at ICANN, but I think 2-4 characters covers the current lower and upper limits (another flaw in my last regex).
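
I use it in Perl, but the pattern is plain PCRE, so – just as a sketch – it drops into something like PHP’s preg_match() unchanged (the test addresses below are made up):

<?php
// Quick sanity check of the e-mail mask; returns true if the address matches.
function is_valid_email($email)
{
    return preg_match('/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+\.([a-zA-Z]{2,4})$/', $email) == 1;
}

var_dump(is_valid_email("some.user@example.com"));  // true
var_dump(is_valid_email(".bad@example.com"));       // false - leading dot
var_dump(is_valid_email("two@@example.com"));       // false - more than one @
?>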

Go ahead, embrace your inner and outer geek…

Microsoft Acts | People React

As usual, Microsoft has been in the tech press recently (when aren’t they?).

There are a couple of issues that caught my interest:

MS Temp Fired for Blog Contents

According to his own blog, Michael Hanscom was fired from his temp position at Microsoft.

His crime? Taking pics of Apple G5s being unloaded at the MS campus and publishing the pics – and the loading dock’s whereabouts – on his blog.

OK, a lot of people seem outraged.

Why?

Mainly because of kneejerk anti-MS feelings, apparently.

This guy – in all innocence, to be fair – took pictures at work, published them on the Web, and disclosed the contents of what was being unloaded, where the dock was, where he worked and so on.

This isn’t good – even if there is a certain paranoia on the MS campus, why would they risk keeping this guy (only a temp, as well)? What will he photograph/copy and post next? Code samples, meeting agendas, manager schedules? Sure, that sounds paranoid, but you just don’t out your employer in this manner, unless it’s a matter of public safety (which is why there are whistleblower laws).

Hanscom even tips his hand – in his blog entry – that he felt that pictures and postings could cause issues:

…”when I took the picture, I made sure to stand with my back to the building so that nothing other than the computers and the truck would be shown — no building features, no security measures, and no Microsoft personnel.”

Michael Hanscom

But then he posts the pictures with information about where the dock was and so on.

I’m sorry, I don’t feel too sorry for this guy – he screwed up, and – true – he didn’t mean anything malicious. But he did screw up.

You play, you pay…

Microsoft Bets the Company on Longhorn

Even C|Net is getting in on the speculation over this latest acknowledgement (not that it’s new info) from Redmond that Longhorn is a “bet the company” move, calling the gambit a gamble.

I don’t get it.

This is a fulcrum point for MS – they can either (try to) keep selling WinNT-based OSes and virtually identical new editions of Office (is there anything in Word 2000 that you cannot live without that’s not in Word 95? Not unless you’re trying to use Word as a Quark substitute), or they can push something genuinely new.

This is the next stage in the evolution of personal computing, one that actually is predicated on the needs of business – like it or not, the DRM features, new file systems, services support and so on are squarely targeted at businesses.

Because it will make it simpler for Web services to (finally!) become commonplace.

Which is an interesting statement (if true) because that means that Web services won’t become commonplace for another four years or so (Longhorn due sometime in 2006; widespread adoption will take another couple of years after that).

But as far as the gamble MS is taking: I don’t think so. By the time the OS (and support tools, such as Yukon [the new SQL Server]) rolls out, businesses will be ready. Businesses have been slow to embrace XP – sticking with 2000 or NT (but support is now gone, since June, I believe). Unless the OS is delayed too much (always a possibility) and businesses finally move to XP in the meantime, there should be a real need for a new tool.

Especially if the businesses want to jump onto this new-fangled XML/Web Services thingee…

Meme’s the Word

meme – n.

A unit of cultural information, such as a cultural practice or idea, that is transmitted verbally or by repeated action from one mind to another.

– Dictionary.com

Loosely used on the ‘Net (especially in the blogosphere), a meme is a sort of zeitgeist – something/someone with that intangible buzz. At least that’s how I’m going to use the term in this entry.

Today, some obvious memes are Google and the act I’m performing right now – blogging.

But – for reasons best left unwritten (not because I’m hiding anything; the reasons are…meaningless and pretty darn boring) – I’ve been thinking about memes lately.

Mainly, I was thinking about memes of the past – and which ones are current now.

Expired Memes:

  • ZDNet: Remember ZDNet? Next to C|Net’s news.com, it was my favorite tech news site for a few years. And then C|Net bought it. And it’s been going downhill ever since – a couple of good columnists are left, but not much else. And it really doesn’t differentiate itself from news.com, so what’s the purpose? (Yeah, ad dollars…)
  • Jon Katz: Love him or hate him, he has pretty much evaporated since he left/was cut from Wired.com – but he was relevant in some fashion for a while. Hell, Slashdot even has a preference where you can suppress Jon Katz stories. While newbies probably have never heard of him, Katz was a strong voice during some of the Web’s seminal years.
  • Browser Wars: Remember the browser wars? Sure you do… There’s actually a new set of browser wars going on, this time not over installed base, but over standards support. It’s not in the regular media much because the fight is different: in the first browser war, MS wanted to own the browser to control the desktop. That didn’t really work out the way anyone thought it would. Today, the browser war is standards bodies and developers crying for standards…and MS doesn’t much care. How does that help them?
  • Netscape: Do I really need to comment?
  • Content is King: While I think the pendulum will, to a degree, swing back to this meme, right now it’s more flash (literally – Macromedia’s Flash) than substance.
  • Webmonkey: Remember when Webmonkey was relevant? A daily must-read? No more. Very sad.

Today’s Memes:

  • Google: While the Google backlash is certainly building and has been noted here, Google is still to search engines what Windows is to OSes – except most consider Google the best engine, while Mac and Linux/*nix users will – and can – present strong arguments for their choices.
  • Blogs: Again, there is a backlash in the works here – and the whole divisive nature of many eminent bloggers/blog tool makers has damaged adoption – but blogs have filled several important voids for many authors and readers:

    • Unbiased voices – single voices making a difference
    • Additional data – for reporters such as Dan Gillmor, blogs offer a way to supplement their stories, publish additional information that would never make it into dead tree publications (for many reasons). This cannot be a bad thing: Hey, don’t care about this extra stuff? Fine. It does not interfere with your print reading and so on. But it’s there if you care.
    • Publication ease – I’ve always maintained that the Internet’s killer app is not the Web, but e-mail. In the same way, blogging – for all its benefits – is (in my mind) most powerful as a simple way for anyone to publish. Sure, MS FrontPage is pretty easy and all that, but one still needs a domain (what’s a domain?), has to sorta understand FTP and so on. Fuggetaboutit. With some blog tools/services, all you need to know is how to use a browser and type. THAT’S damn powerful.
    • New life to the Home Page of yesterday: Today’s blog is yesterday’s My Home Page, to a large degree.

  • Wireless: Not a strong meme, but certainly one that is almost past meme status because it’s been adopted so widely. Hell, it’s expected nowadays at tech conferences, and this will bleed over to regular conferences and other areas. Wireless is a stealth meme because there are so few reasons to fight against it. One may consider, for example, a tablet PC to be either an oversized Palm or a crippled laptop. OK. But the argument against wireless is probably one of only two: 1) protocol issues (a, b, g…), or 2) security (hard-wired is more secure than wireless, in general). Beyond that, wireless is a good thing. And these arguments are not Windoze vs. Linux issues – they’re about specific instances and can be easily reconciled.
  • *nix: Linux is a meme in itself, but it’s also part of a larger meme, which can be described either as a Windoze backlash (in this case, not necessarily a knee-jerk reaction) or as a real trend: people are looking for a stable OS. With hardware becoming a commodity and increasingly powerful, OS software is becoming more interesting to folks. And Linux (stable, cheap, hard) is earning a lot of attention, as is Apple’s OS X – a BSD variant with a solid GUI slapped on top of it. While I run Windows, and will until it’s unnecessary (necessary now because most others do – vox populi standards compliance), I like the concept of Apple’s OS X: it runs the Windows-type programs I need (MS Office, Photoshop) plus has the command-line interface that so many hate but I love. I’m always amazed that all the Linux talk and KDE vs. Gnome GUI flames take place without the explicit acknowledgement that Apple has done what the GNU/Linux community (Lindows, Wine…) has been trying to do for years: a rock-solid OS with *nix underpinnings that has a stable, attractive GUI and runs software people know and love – not just GIMP.

Fall Back

Yes, it’s time for most of the country – except most of Indiana and Arizona, I think – to revert to Standard Time.

Time to reset all your clocks and change the batteries in the fire/smoke alarms.

I’m always amazed at just how many clocks there are in just my small house/small life:

  • Kitchen: Coffee maker, microwave, wall clock
  • Living room: Just a wall clock (a cuckoo clock, if you care…). I never use the VCR anymore, so that’s not touched
  • Bedroom: Couple of alarm clocks
  • Office: Desk clock
  • Bathroom: Wall clock

And this does not count the five computers I have (all currently set to auto change, by the way – Windows and Linux), cell phone (again, auto) and a wristwatch.

And I would say that I don’t have as many clocks as most people – no clock in the dining room, none in the basement other than the one on the computer there.

We are a time-obsessed society.

Search Me Redux

According to a story that ran in the WSJ (sub required; I won’t link), Amazon’s full-text search (see preceding entry) hasn’t won over one publisher: Tim O’Reilly. (View the TechDirt article.)

This is a little surprising, because O’Reilly is usually in tune with stuff like this – hell, the O’Reilly site has lots of free online chapters of books they sell – an inducement to buy the dead-tree book, of course.

And that is pretty much Amazon’s goal, I would think (although they probably have loftier goals, as well).

And – interestingly (to me) – O’Reilly is quoted in the article as saying, “If [Amazon ends] up being a Google for published content…we need to think better about what publishers get out of it.”

Which is pretty much what I alluded to in my last entry.

I wonder what really went on there…it seems like something O’Reilly would be all over.

Search Me

Wow, I was just at Amazon and saw the full-text search it has going now.

Wow.

According to this C|Net article, it currently searches over 33 million pages of text.

Again, wow.

And I don’t think this is the last we will hear about this search. It sounds like a Lexis-for-literature type of tool that…well, kind of encroaches upon Google’s turf (or any search engine’s, but Google is currently the champ).

Going to be interesting.

Picture of the Day

I’ve gotten some feedback on my Pic ‘O the Day feature that I’ve added to the left-hand column, most asking just how I did it.

The assumption is that it’s database driven; it’s not.

Since I am on Blogger, I’m pretty much stuck with a static site that’s written out by Blogger from the database they host and own.

This is bad and good:

  • Bad: I don’t have control over the templates, database and other functions like I do in most of my sites/my other development efforts. I have to funnel all my efforts through the Blogger tool.
  • Good: Since I am at Blogger’s mercy, I have to “roll my own” if I want additional functionality.

An example of such is the RSS (XML) feed that this blog has – it’s a Perl script that runs every five minutes from my home box: it grabs the index pages, parses out the necessary elements (strips HTML…) and writes out and uploads the RSS feed.

Why is this good??

Because I learn from doing this stuff. If Blogger had a built-in RSS feed option, I would definitely use it. They don’t currently have one, so I set one up myself from scratch.

Is it elegant? Nah.

Does it work? Yep.

That’s good.

OK – back to the subject at hand: Picture of the day.

Again, this is a Perl script that I run from home (my host doesn’t allow CRON access…another obstacle!).

Basically, it uses the Net::Telnet package. Using this package, I – through the Perl script – perform the following tasks:

  • “Telnet” to my server and log in
  • Get listing of all JPGs in the full-sized image directory
  • Select one of these images at random
  • Copy the thumbnail and full-sized image that is today’s random picture to “random.jpg” in each directory (full-sized and thumbnail).

That’s it – about 20 lines of code, reproduced below:


#! /usr/bin/perl

$myServer = "[server name]";
$myUsername = "[username]";
$myPassword = "[password]";
$imagesFull = "[path to full images]";

# create telnet object
my ($t);
use Net::Telnet ();
$t = new Net::Telnet;
$t->open($myServer);

## Wait for first prompt and "hit return".
$t->waitfor('/User\@domain:.*$/');
$t->print($myUsername);

## Wait for second prompt and respond with password.
$t->waitfor('/Password.*$/');
$t->print($myPassword);
$t->waitfor('/vde.*$/');

## Read the images in the full-sized image directory, one per line
$t->cmd("cd $imagesFull");
@remote = $t->cmd("ls -1 *.jpg");
pop(@remote); # remove last element; the shell cursor

# get random pic
srand;
$random = $remote[rand @remote];
chomp($random); # remove trailing line feed from the ls output

## copy this file to the "random.jpg" file in the full and thumb dirs
$t->cmd("cp $random random.jpg"); #FULL pic
$t->cmd("cp ../thumb/$random ../thumb/random.jpg"); #THUMB pic

exit;

That’s the hard part done: then I just set a cron job on my local machine and it fires at the interval I want. (I haven’t firmed this up, but I’ll probably stick to once a day.)

I had originally set this process up with the Net::FTP module (because I had done work with this module before), but this didn’t make a lot of sense – I could easily pull back the directory listing, but FTP doesn’t support remote system copy operations (delete only).

So I initially had a script – which worked fine using Net::FTP – but that meant I had to find the random image (no biggie), then download the day’s pic and upload it again under the new name (random.jpg).

For both the full-sized pic and the thumbnail.

Doesn’t make a lot of sense to do four file transfers – and the full-sized images can/could be quite large – when two telnet commands (“copy [pic o day] random.jpg” for each – full and thumbnail – image) will do the same thing!

I knew there had to be a better way.

And I finally (thank god for the Internet & Google!) found the Net::Telnet module.

Installed it from CPAN, and got it up and functioning inside of an hour. I was able to copy a lot of the Net::FTP code (find random image…) right into this new script, and all was well.

One thing I did have to mess around with was the login part – this is not as seamless as the Net::FTP module (though I’m probably missing something).

The telnet script, much more than the FTP script, requires one to have actually done the scripted processes via the command line. Little differences crop up.

For example, the FTP script pulled back all the images – just using “ls [image directory]” – straight into an array.

With the telnet script, I had to do an “ls -1 [image directory]” (to get a single-column listing), and every element came back with a trailing line feed (like STDIN). So I had to chomp the selected image to remove it.

In addition, the directory listing – at least on my host – returned the shell prompt (i.e. “[$bash 2.0.1]#” or what have you) as the last element. So I had to use pop() to remove it.

I’m not complaining, but it does seem as though the Net::Telnet module is not as generic as the Net::FTP module – or maybe it’s that telnet is not as generic as FTP.

Whatever. It’s done. Little bit of work (a couple of times) and I learned a bunch.

That’s the bonus of Blogger – you’re forced to learn to advance.

I’m cool with that.