Monday, June 8, 2009

Using Twitter To Do Market Research


If you are a company selling products and services, would you use Twitter to do your market research? Some companies do, but it may not be such a good idea.

Twitter is a very popular, simple, and easy-to-use social networking and micro-blogging service, so a lot of people use it. People, of course, mean consumers, and lots of people gathering together definitely mean lots of potential customers. It did not take long for companies to put two and two together and conclude that Twitter is where it’s at: the place where the pulse of the market can be felt.

So some companies do their market studies on Twitter. They listen to what customers have to say, what products or services they like and dislike, and what people say about their products and services and those of their competitors.

Listening to what people tweet about your product and your competitors’ is obviously a wise thing to do. You can learn a lot from that exercise. However, doing market research there will not give you reliable results.

An article in Reuters said that just a few people on Twitter do all the tweeting. It cited a Harvard study which found that only 10 percent of Twitter users generate more than 90 percent of the content. These are the more active and vocal users, so much of the information you gather will come from a select few.

So, by doing your market research on Twitter, you are actually basing it on just roughly 10 percent of all Twitter users. A far cry from even a majority. Still, it is not a fruitless exercise. Companies can glean a lot of helpful information that way. Just don’t expect a scientific outcome.
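The sampling problem is easy to see with a toy simulation. The numbers below are made up to match the study's 10/90 split, not taken from the article: if 10 percent of users produce roughly 90 percent of tweets, then tweets sampled at random overwhelmingly come from that small minority.

```python
import random

random.seed(42)

# Toy population: 1,000 users. The top 10% are "heavy" users who
# tweet 90 times each (9,000 tweets); the other 90% tweet once
# each (900 tweets), so heavy users produce ~91% of all content.
tweets = ["heavy"] * (100 * 90) + ["casual"] * (900 * 1)

# Sample 500 tweets at random, as a naive market study might.
sample = random.sample(tweets, 500)
heavy_share = sample.count("heavy") / len(sample)

print(f"Share of sampled tweets from the top 10% of users: {heavy_share:.0%}")
```

A study that weights opinions by tweet volume therefore ends up measuring the vocal minority, which is exactly the skew the Harvard study describes.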


Friday, June 5, 2009

Care For An Ubuntu Netbook?


Netbooks are currently coming on strong. And why not? They let you do most of what you need to do without the bulk, weight, and price of a full-fledged notebook.

But the small size of netbooks means less memory, storage, and computing power. Enter Ubuntu Linux, one of the most popular Linux distributions around. From nowhere, Ubuntu skyrocketed to the top. And since it is a Linux OS, it requires fewer resources. This means that Ubuntu Linux and netbooks are truly meant for each other.

“What about hardware compatibility problems?”, you ask. Well, not only does Ubuntu Linux help netbooks by providing a stable and powerful OS that uses fewer resources; netbooks return the favor by playing to the strengths of Ubuntu Linux. Being small devices, netbooks do not have a lot of components for Ubuntu Linux to be incompatible with. The graphics card is definitely not top of the line with a lot of features, so not much incompatibility there either. And since a netbook is portable, few external devices are attached to it. With netbooks, the strengths of Ubuntu Linux are highlighted and its shortcomings downplayed. In short, Ubuntu Linux, or any other Linux distribution, could really shine on this portable platform.


Wednesday, June 3, 2009

A New Online Pastime—Scam Baiting


Many have received that email from somebody spinning a tale of unclaimed money, waiting to be claimed if only the recipient is willing to help (and, of course, send an advance payment to the scammer). This email scam has been going on for quite a while on the internet. But that is not the sad part. The sad part is that people still fall for it. It is not only a loss for those people but also an encouragement for the scammers to continue their despicable work.

Enter the new online pastime: scam baiting. In this new kind of online game, scammers get scammed. They travel for miles, sometimes thousands of miles, in search of non-existent cash; go to dangerous places that could get them killed; risk dehydration in very hot and humid places; meet people who will not be there; get arrested at the airport; spend cash along the way; and go back home empty-handed after getting stranded.

The online vigilantes called scam baiters say this has two beneficial effects. One, while the scammers are busy getting scammed, their efforts, attention, energy, and resources are redirected away from real victims. Two, it will hopefully discourage them from scamming people again, because their prey might in fact be a predator. I would add a third benefit: the scammers get to feel what it is like to be on the receiving end of a scam. It also serves as petty punishment for all the hassle and grief they have caused.

Now, another online game is in cybertown. And the victims are fighting back.


Monday, June 1, 2009

When Google Goes Down, The Web Panics – What This Means to Cloud Computing


Well, that could be a very strong statement, I agree. But we can’t deny how dependent we are on Google and all its services.

You know: Blogger, where this blog, along with many others, is hosted; FeedBurner, which ensures that our readers get fed; Picasa, where all the images in our blogs are hosted; Gmail, which gets our mail delivered and received; Google News, for our daily read; and of course, the one and only Google Search.

So it came as no surprise that Google became quite a sensation on the micro-blogging platform Twitter when it suffered a failure on the morning of May 14 (EST). I do not know whether the cause of the failure will have been publicized by the time this post goes up, but as I write this, there is no clear explanation yet.

There are rumors that Google’s security was compromised. I have not confirmed this, but if it were true, it would be very troubling. It is something we need to consider very closely before we even attempt to take computing up into the cloud. As I mentioned in a previous post, security is a major consideration in cloud computing.

Even if it were not true, and it was just a reliability problem, it is still something to consider in cloud computing. As long as it’s down, it’s down, regardless of the cause.

If this can happen to the mighty Google, it could happen to anybody including vendors of cloud computing. If it does and cloud computing goes down, your business might go down with it.


Friday, May 29, 2009

A Computer Assistant In Your Car


Think about it. Most cars today already have a computer chip. This computer regulates stuff such as fuel intake, air-fuel mixture, ABS, and traction or stability control based on certain inputs like vehicle speed and wheel rotation, among others. With the aid of other technological advances, maybe we can increase the variety and usefulness of this computer’s IO.

Distance sensors, like radar, infrared, and lasers, are already a mature technology too. We have been seeing them in the rear bumpers of a lot of late-model cars. They are particularly helpful when you are backing your car into the garage or parallel parking in a tight space.

As for Wi-Fi, it has already gone through a series of revisions and also seems to be a mature technology. It has been helping us exchange data and information over wireless networks.

Finally, we have GPS, another mature and quite helpful technology. With it, hopefully you will never miss another turn. But even if you do, it is there to help you find your way again. Ever heard that familiar voice saying “recalculating”?

If we mix this all together in the right proportions, guess what’s cooking? A very tasty technology treat.

Just imagine if a car could communicate its properties, such as weight, distance, and velocity, to another car. Both cars’ computers could calculate stopping distances and activate the brakes at the right time and with the right pressure. Or at the very least, they could warn both drivers of an imminent collision so they can steer away from it if possible.
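As a rough sketch of the kind of arithmetic such a computer would do, here is the standard reaction-plus-braking distance formula; the friction and reaction-time values are illustrative assumptions, not figures from any real system:

```python
def stopping_distance(speed, reaction_time=1.5, friction=0.7, gravity=9.81):
    """Total stopping distance in meters for a speed in m/s:
    distance covered during the reaction delay, plus the braking
    distance from the work-energy relation v^2 / (2 * mu * g)."""
    return speed * reaction_time + speed ** 2 / (2 * friction * gravity)

def collision_warning(gap, speed):
    """Warn when the gap to the car ahead is shorter than the
    distance needed to stop from the current speed."""
    return gap < stopping_distance(speed)

# At 30 m/s (about 108 km/h) the car needs roughly 110 m to stop,
# so an 80 m gap would trigger a warning.
print(round(stopping_distance(30), 1))
print(collision_warning(80, 30))
```

A real system would of course fold in road conditions, sensor noise, and the other car's telemetry, but the core calculation is this simple.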

When worse comes to worst and a collision happens, the computer can determine which airbags to deploy and the precise moment to deploy them, depending on where the impact occurred.

A lot of rear-end collisions could also be avoided. When travelling along a long interstate after just a few hours’ sleep, some drivers’ attention might be a little less than what’s required. Given the speeds on such highways, a small delay in reaction time can cause major problems. But a computer never gets sleepy. It constantly analyzes input from a front-mounted radar and applies the brakes intelligently when the distance becomes a little too close for comfort. It could also slap you with a mechanical arm to keep you awake. Better that than the cops slapping you with a ticket.

Or how about the GPS knowing that you have to turn right at the next corner, so it not only warns you but also slows you down? After all, you really need to slow down to take that turn, traffic or no traffic. If the light is red and it’s a no-turn-on-red corner, no sweat: you can bring the car to a complete stop. At least it has already slowed itself down for you.

Or how about this: the GPS, together with your radar, notices that you’ve been swerving a bit in the last couple of minutes. Instead of leaving you alone for the cops to notice your state of wakefulness, it beeps, flashes a light, slaps you again in the face with that arm, and serves you fresh cappuccino. Well, I was just kidding.

With all the mature technologies currently in our hands, we can perhaps develop a safer car. The key is in the creative and proper interface and programming of those existing components.


Wednesday, May 27, 2009

Visual Representation of Information


We humans are a visual bunch. That is why there are more people watching TV than reading books. Even at an early age, most of us find it easier to distinguish colors than musical tones; most children find it harder to identify a given note than a given color.

Anyway, how do we take advantage of our visual prowess to understand and interpret complex information? Simple. We make the complex information visual.

You may say that this is not new. We already have graphs and charts that simplify the interpretation of numeric data. But I am talking about something more complex than numeric data. Graphs and charts are getting old, you know.

We also currently use visualization techniques not only to understand data but also to perform actions. Think of dragging a file to the recycle bin or the trash in order to delete it instead of typing rm -i <filename> at the command prompt. But again, that is as exciting as watching paint dry and does not make our hearts race, does it?

What about seeing a bug on your screen when you are infected by a virus? What about an anti-virus that looks like a bug swatter to get rid of it? What about watching your files being “eaten up” by the virus?

The same could also be applied in the field of information security, where you need to analyze a lot of complex data and log files. If you do this manually, the intruder will be long gone, tracks covered, before you figure things out. Imagine being visually informed of an attack as it takes place and tracking the intruder with visual tools. It would be fun and gratifying to graphically kick him or her out of your system.

Well, these are just musings of a writer whose imagination went wild. But then again, wouldn’t that be interesting?


Monday, May 25, 2009

Computing In The Clouds


Computing in the clouds. Wow! That sounds a lot like computing Nirvana. Are we there yet?

You may have heard of cloud computing. But unlike how it sounds, it is not about heavenly computing or computing without tears. It may not even be a BSOD-free experience.

What is it then, you ask? Well, there seems to be no universally accepted definition. I searched the Web high and low and found that different vendors have different ideas of what cloud computing is or should be. They define it in a way that better positions them in the cloud computing market.

But generally, it is computing where the manipulation of data is done by, or the storage is located in, a network of computers. This network could of course be the Internet, but it can also be any other network.

The idea is to send your data up to this network, which then utilizes the resources of the machines connected to it and sends the results back to you. You could say this is an implementation of the vision: the network is the computer.

With this infrastructure, it is possible to use a puny little netbook yet still have access to the power of a supercomputer. Lovely, isn’t it? We only have one problem: security.

Take a look at our current configuration. Computation and storage are done locally on our machines. We only connect to the Internet to browse. Still, we hear news of intrusions, worms, and viruses. How do you think it would look if, instead of our computer being connected to the Net, the Net were our computer? Things would be easier for system intruders because they would already be part of the Net, which happens to be our computer. It is as if the intruder were right beside you, with keyboard and screen access to your machine. Not a lovely thought.

Cloud computing might be the future. But we still have a lot to do to get there.


Friday, May 22, 2009

Windows 7 Has Depth


Windows 7 has been featured in a lot of tech news lately. Its new features have been advertised extensively. But as it turns out, not all of them were advertised equally well.

Mike Williams listed 15 things you need to know about Windows 7. What is fascinating is that these are really cool features, yet they are not being trumpeted aloud. Why could that be?

Normally, a company will highlight the best features of its products, not just the superficial ones. Yet in the case of Windows 7, the great features are the ones not publicized.

I have a possible explanation. This list of 15 features is a bit technical. They are things that most regular users will not be interested in, and most of them are behind-the-scenes improvements.

So what Microsoft did was tone them down a bit. They are targeting users who are not really that tech-savvy. Crowding the feature list with a lot of technical items these users don’t understand or know how to use would only confuse them. So they focused on things people understand, like bigger thumbnails and touch-screen capability.

This means Microsoft is starting to make things simpler. They are now willing to omit details which might confuse their customers.

But whether a feature is publicized or not does not matter. It will still be in Windows 7, and geeks like me will have a field day with it. This is an example of how competition can help everybody, as I mentioned in a previous post. Without much competition, there is little or no motivation to improve. Without Linux and OS X, we might still be in Windows’ dark ages.


Wednesday, May 20, 2009

Linux Is Gaining Market Share


I found out from the Berkeley Linux Users Group that the Linux market share has just passed 1%! This is good news for Linux users and non-Linux users alike, including Linux haters.

Looking at the chart, it is noteworthy that aside from reaching 1% market share, it is increasing linearly—a positive slope!

Why is this exciting? First, one of the problems with Linux is less-than-ideal hardware support compared to Windows. The reason is that hardware vendors keep the interfaces of their devices secret, making it difficult for third parties to develop Linux device drivers.

Some hardware vendors do make their own Linux device drivers in binary form, but usually these are buggy or not updated regularly. The vendors have no motivation to do serious work on their Linux drivers because only a small percentage of their customers use Linux.

But this development is going to change that. With more people using Linux, hardware vendors will start to take their Linux device drivers more seriously. Linux users should celebrate. Moreover, if hardware support improves because vendors release quality drivers, more users will come. An upward spiral.

So what’s in it for non-Linux users? Without the threat of Linux, Microsoft would be tempted to rest on its laurels. But with stiff competition from a free OS that is gaining market share, they are sure to burn the midnight oil. Bottom line? More innovation. Windows users will get a better Windows. The same is true in the Mac camp.

Competition is good. Everybody should celebrate.


Monday, May 18, 2009

Think of Your Fingers and Choose Your Keyboard


In the past, when I built a personal computer, I thought of the system board first. I decided which of the current offerings had what I wanted and whether it supported Intel, or both Intel and AMD. Then I chose the processor brand, type, and clock speed. Next on the list were the make and model of my graphics card and the brand and size of secondary storage.

After buying everything that goes inside the grey box, I went for a monitor, keyboard, and mouse, almost as an afterthought. For the keyboard and mouse, the only consideration was price: not brand, not model, nothing else. The cheapest I could get, just to complete the system so that I could boot it up and get on with my work or the fun stuff.

Then the time comes to upgrade. A new game won’t run on my current video card or processor. Sometimes, new software needs more memory than the slots on my motherboard allow, so I have to change the main board along with all the components attached to it. The hard disk also starts to run out of space.

My IO devices (read: keyboard, mouse, and monitor), however, remain. They stay with me while I change the other components. In a way they are more permanent. You don’t change keyboards every time a new version of Windows or The Sims comes out.

So it makes perfect sense to invest in quality keyboards and other IO devices, as you won’t be throwing them away any time soon. Add to that the fact that most of your time at the computer is spent typing on the keyboard, staring at a screen, and moving the mouse around. You will then see IO devices in a whole new light and appreciate them more.

That is why, when I found this article about 13 Super Cool Computer Keyboards, I felt I had to share the find with you.


Friday, May 15, 2009

Drag Racing For Geeks


You think geeks don’t join competitions? You think geeks don’t feel the need for speed? You think geeks don’t wanna race? Think again.

A new competition is emerging on the hardware technology scene. Just like in drag racing, speed is the goal. Also like in drag racing, competitors tweak hardware to achieve that goal.

The difference between this geeky competition and drag racing is that the hardware being tweaked consists of computer components instead of a Civic’s intake and exhaust ports, valves, and their timings. Also, the aim is to get higher benchmark scores or finish with the fastest time instead of building a 10-second lightweight rocket of a car.

One such competition is the Gigabyte Open Overclocking Competition. The North American Final is hosted in California and the World Championship in Taipei, Taiwan.

Needless to say, there are lots of mouth-watering hardware goodies to win in addition to a round trip ticket to Taipei, Taiwan. So, gentlemen, start your systems!


Wednesday, May 13, 2009

Making Screencasts in Linux


Linux is becoming more user-friendly with each passing day. But it still has a lot of applications of varying usability.

To help new users, it would be nice if there were plenty of video tutorials on how to use common Linux applications. Video tutorials on using Linux itself, especially the command shell, would also be welcome.

To do that, we need an application capable of recording what is being done on your desktop, while letting you narrate what you are doing. If you are using a Macintosh, no sweat: you have Apple’s QuickTime at your disposal. There are other applications too, I am sure.

For Linux, I found three applications that will help you record your Linux desktop. This is precisely what we need to help Linux newbies.

The article, written by Craciun Dan, enumerates three such applications.

Other applications are mentioned in the comment section of Craciun Dan’s blog post.

Now, we only need more Linux movie producers. And, action…


Monday, May 11, 2009

Web Security Is Important, Especially for Sites Like That of the MPAA


It is said that the moment you have an online presence, you immediately risk a security intrusion. You are already a nice fat target. The only question is whether someone wants to shoot you.

In the case of the MPAA, everyone knows there is no shortage of people who want to take the shot, thanks to the recent prosecution of ThePirateBay.org. Some people got so upset that they want to bite back at anyone in the opposing camp or closely associated with it. And these people are far from technically incompetent, to say the least.

So, it is reasonable to expect that the MPAA would have strengthened its defenses in anticipation of possible retaliation. But an article I came across suggests otherwise. It looks like the MPAA’s website has a vulnerability that allows arbitrary code to be injected into the site.

Oh my. This better be fixed. And fast.


Friday, May 8, 2009

Incremental Web Design


Businesses need to reach out to their markets, especially in these rough times. However, they need to do so in a cost-effective manner, because it would be ironic to do otherwise.

One way to do that is through cyberspace. By using the Web, businesses can reach the widest possible audience.

Online advertising is cheaper than traditional real-world advertising, and it reaches a wider audience. But an even cheaper option is to make an incremental design update to your own website.

It does not need to be a complete overhaul. Just a small, visible eye-candy update to freshen things up a bit. The keyword, of course, is incremental: the change is not random but is designed to be part of a much bigger design while also standing on its own. You could call it design by installment.

This is done all the time in the desktop applications world. Companies release software and periodically issue updates or service packs.

Using this strategy, you can break down the cost of a thorough Web design update by spreading the implementation over a period of time. But for this to work, the overall design must be modular.


Wednesday, May 6, 2009

Computers and Artificial Intelligence


Something is cooking in IBM’s ovens: a computer program intelligent enough to go head to head with human contestants on ‘Jeopardy!’.

This will be a huge step for the field of artificial intelligence if they can make it work. Imagine the possibilities if we had computers that could understand human language and act based on it and on an extensive database of stored information.

But understanding human speech is not an exact science, and we still have a long way to go. There are simply things that computers will find hard to deduce.

For example, when a person says “I did not steal the money.”, it means just that. But when the same person says “I did not steal the money.” while stressing the word I, it carries the same literal meaning plus an implied statement that someone else did. That the money was stolen is admitted.

If the person instead says “I did not steal the money.” while emphasizing the word steal, he is saying the same thing as before, with the added meaning that he took the money by other means, such as by asking for it. He has the money, but the taking was other than by stealing.

Had the emphasis been on money, it would mean: “I did not steal the money.” (I stole something else.)

Distinguishing the different meanings of the same statement based mainly on where the emphasis is placed is a tough job. It might be doable, but maybe not today.
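If the stressed word could be detected at all, the mapping from stress to implication is simple enough to write down; the hard part is the acoustic detection itself. A toy lookup, using my own illustrative glosses of the examples above, might look like:

```python
# Implied meaning of "I did not steal the money" as a function of
# which word carries the stress. Detecting the stress from audio is
# the genuinely hard problem; this table only covers the easy half.
IMPLICATIONS = {
    None:    "plain denial",
    "I":     "someone else stole it",
    "steal": "I got the money some other way",
    "money": "I stole something else",
}

def interpret(stressed_word=None):
    """Return the implied reading for a given stressed word."""
    return IMPLICATIONS.get(stressed_word, "plain denial")

print(interpret("I"))      # someone else stole it
print(interpret("steal"))  # I got the money some other way
```

The table makes the asymmetry plain: the words are identical in every case, so all of the extra meaning rides on prosody, which is exactly what today’s systems struggle to extract.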



Monday, May 4, 2009

The New Ubuntu 9.04


Ubuntu is a really unique Linux distribution. It came on the scene late but grabbed the spotlight without much ado and has held on to it since. That is something.

I have had the opportunity to use Ubuntu in the past. It was a major innovation then. And based on the reviews I have read, the new Ubuntu 9.04 is even better.

Ubuntu 9.04 lives up to the hype. It is what it is claimed to be, with major improvements over previous versions and even over other distributions.

Support for older hardware is better and boot times are faster. Linux is known to be stable but this release is way more stable than previous releases. The new GNOME 2.26.1 is also significantly better such that even KDE lovers might give it a second look.

Better Linux distributions plus a better Snow Leopard plus a better Windows 7 equals three great choices for users. Ah, tech life is good.


Friday, May 1, 2009

A New Way of Dealing With Piracy


In the past, piracy was always met with a lawsuit or the threat of one. Look at what happened to The Pirate Bay. Some developers are trying a different approach, though.

I just came across an article relating how a game studio from Chile has dealt with the piracy of its game. ACE Team is the developer of Zeno Clash, a first-person shooter. They met piracy head-on, not with a threat, but with an appeal:

I’m one of the developers of Zeno Clash. I would appreciate you read this if you are about to download this file.

Zeno Clash is an independently funded game by a very small and sacrificed group of people. The only way in which we can continue making games like this (or a sequel) is to have good sales.

I am aware that at this moment there is still no demo of the game, but we are working on one which will be available soon.

We cannot do anything to stop piracy of the game (and honestly don’t intend to do so) but if you are downloading because you wish to try before you buy, I would ask that you purchase the game (and support the independent game development scene) if you enjoy it. We plan on updating Zeno Clash with DLC and continuing support for the game long after it’s release.

Thanks for taking the time to read this… hopefully it will make a difference.

Carlos Bordeu
ACE Team

This is a new approach. Whether it will work, only time can tell. ACE Team seems to understand that they can’t fight fire with fire, so they decided to fight fire with water. Let’s see if the fire dies out, or at least is reduced to embers.


Wednesday, April 29, 2009

Apple-like PC Hardware


Thanks to Apple’s hardware design innovations, people are beginning to see the importance of design in products. Plain grey boxes won’t cut it anymore. To be competitive in the computer hardware industry nowadays, a product has to do more than just work; it must also look good.

In the trendy notebook category, one of the latest entries is Dell’s Adamo. Clearly, it has the good looks. This development is lessening the pain of Apple envy. You can now carry around hardware you don’t need to hide in shame every time someone whips out her cool MacBook.

But carrying elegant PC hardware is one thing; using it is another. Upon turning the beautiful thing on, reality hits you square in the face: Windows still lurks within. You are still limited by Windows’ quirks and unreasonable will.

It is true that PC manufacturers like Dell, Lenovo, and Sony are trying to produce nice Apple-like hardware, which is a good thing, really. However, hardware is only one part of the equation. System software plays another, more significant part, and the integration of the two is not something to be put aside. On these counts, it’s hard to beat Apple.

Sure, PC manufacturers have the ability to design nice hardware, but they have no say in the design of Windows. The result is that the usability of their products is beyond their control. Apple, on the other hand, has beautiful hardware, great system software, and nice integration between the two. That’s a tough card to beat.


Monday, April 27, 2009

Improving Microsoft Windows


When Apple shifted its operating system from the classic Mac OS 9 to the Darwin-based Mac OS X felines, the move was very well received by the market, and for good reason. Mac OS became much more stable and flexible and gained preemptive multitasking and other capabilities, all while retaining the ease of use the Mac is known for.

The classic Mac OS certainly had a pretty face but did not have washboard abs. OS X changed that by replacing the core with Darwin.

I was just wondering whether Microsoft could do the same thing with Windows. Their user interface may not be as pretty as OS X’s, that’s for sure, but it is definitely more familiar than KDE or GNOME (familiar, but not necessarily better). If they followed Apple’s lead, they could also produce an OS with the familiar Windows interface and a rock-solid core.

They would have a lot of kernels to choose from. They could help develop Hurd, use Linux, or adopt any of the BSDs. Personally, I think the BSDs, more particularly FreeBSD, would be better. First, I have the impression that it is better engineered; second, the BSD license is more commercial-friendly than the GPL. But they could always use Darwin too…

What’s clear is that Microsoft seems to have hit a wall in OS innovation. Windows 95 was good in its time, Windows XP has also been nice, but Vista was delivered late, with fewer features than expected, and awfully bug-ridden.

To aggravate the situation, not long after Vista’s birth, Microsoft will already be releasing the new Windows 7. This seems to be an admission by Microsoft itself that Vista is no good, which is why they are replacing it ASAP, as if Vista never happened to disgrace Microsoft.

Contrast this with Windows XP. It lived to a ripe old age before a replacement from Microsoft came along; and even with that replacement, XP refuses to die. Vista, on the other hand, has not even reached puberty, yet Microsoft is relieving it of its post.

That is a sign. If only Microsoft can perceive it.

That's what I like about Apple: they can recognize a good thing (Darwin) when they see one. If Apple can do it, Microsoft should be able to do it too, if they want to. It is indisputable that the change breathed new life into Apple. Wouldn't Microsoft want the same success?


Friday, April 24, 2009

Anti-Microsoft Sentiments


I do not fully understand the hate surrounding Bill Gates and Microsoft. Normal criticism is acceptable, but some of what gets posted goes way overboard. Reading certain Web pages and "Micro$oft" jokes, it seems fashionable to hate these guys.

Me, I'm not really a fan of Microsoft; but I don't call them "Micro$oft" either (or Micro$hit, Windoze, or whatever the creative spelling of the day is). Like I always say, tools are just tools, not rallying points. You don't hate or love your wrench, do you?

If the cause of the hatred is the inferiority of their products, I do not think the hate is justified. If you think Windows sucks, why live with it in hate? It's not like you have to go to court and file for divorce.

You can, instead, go the Mac or Linux route (or Solaris, FreeBSD, BeOS, Darwin, whatever), be done with it, and be on your merry way creating good things for other people. Can’t switch because your company forces you to stay? Then bash your company, not Microsoft.

Other companies, like Oracle, also made bad design decisions; Apple got it wrong with the Newton; but the focus seems to be solely on Microsoft.

Sure, Microsoft has created some duds like ActiveX, but great products have also come out of their doors—Microsoft Office, Expression Studio, and Visual Studio come to mind, and I'm sure there are others. Have we not learned that only those who never tried never failed? Do I have products that suck? No. Why? I haven't released any, that's why!

Could it be Microsoft's business model? Well, they are not exactly the most blameless organization out there, but they are also not as evil as some would have us believe.

Yes, they want to profit from their product—and who doesn’t? And yes, they take measures to protect their property from theft, like WGA, which in some ways irritate some users. Well, people also fence their homes to prevent theft, don’t they?

I myself am an open source advocate. I like transparency in code, free software and mutual sharing. But that doesn’t mean I hate proprietary software. For me, choice is good. Why fight proprietary software? If they’re not as good as the free alternatives, they’re simply going to fade off anyway. If they make it, they might, at least, be good for something.

True, they might have ideas that differ from our own, but should anyone suffer for that? Giordano Bruno was burned at the stake partly for ideas that deviated from what was accepted in his time. Women were burned as witches simply for being different. A lot of people have suffered in the past for being different. Shouldn't we put an end to this and start respecting each other's ideas?


Wednesday, April 22, 2009

Proper Design Invokes The Right Emotion


Whether it is a website or a logo, a good design stirs the emotions of its audience. You get a distinct feeling when you see Toyota's overlapping-ellipse logo, a feeling you associate with the company. So too with the stylized H of Honda or the stylized A of Acura. Would McDonald's be the same had its logo been different? Probably not.

This effect is especially important when branding and company image are involved. Like I said, McDonald's bright yellow twin-arched M has become so much a part of the company that it's hard to think of McDonald's without that logo popping into your mind. It is therefore important to put some thought into what image you wish to associate with your company or brand.

Achieving this effect is no mean feat. The designer has to do a lot of research on the client, the intended audience, and even the company's competition. A proper match must be made; otherwise, the design may either not reflect the company properly or not appeal to the company's target market.

Because of the effort involved in making a proper design, it tends to be a little on the expensive side. Design is time consuming. Take Google, for instance. It is not testing just two shades of blue and deciding between those—it is testing 41 shades of blue!

Apparently, Google thinks that not all shades of blue are created equal in inducing visitors to click on things—and they're right; different designs invoke different emotions. And that's just one aspect of the whole design equation.
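Testing 41 variants works by deterministically assigning each visitor to one shade and then comparing click-through rates per shade. A minimal sketch of the bucketing step (the function name and the hashing choice here are my own illustration, not Google's actual method):

```python
import hashlib

def assign_variant(visitor_id: str, n_variants: int = 41) -> int:
    """Deterministically bucket a visitor into one of n_variants groups,
    so the same visitor always sees the same shade on every visit."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_variants
```

Counting clicks per bucket then tells you which shade induces the most clicks. Hashing, rather than random assignment, keeps the experience consistent for returning visitors without storing any state.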

But there are also cheap logo makers and webpage creators out there who churn out (almost) mass-produced graphics. (Note that I did not use the word designers.) They can afford to offer cheap work because they don't put in the time to really go through the design process.

These instamatic graphics makers are killing the real designers. A person in the market for a new logo would not know the difference (who could blame them? they're not designers) and, understandably, would settle for the cheaper option.

It is tough to sell good design to these buyers considering that there is no quantitative test for its efficiency and yield. You cannot tell your client that her return on investment for this logo will be such and such. So customers go by the numbers—price.

Given the above, I cannot blame Niki Brown for the rant about these el-cheapo operations. I guess there's hardly anything to be done about it, so caveat emptor!


Monday, April 20, 2009

Microsoft Expression Web


When I decided to venture into Web design and development, I ran Internet searches to check out some tools. As you may already know, this is a field where changes happen at an alarming rate, and the tools and practices you learned can become outdated in the blink of an eye.

Of course, there is Dreamweaver CS4, the de facto standard in professional Web design and development—no surprises there. It held that spot even when Adobe's own GoLive was still on the software map. It's an all-in-one tool where you can do lots of things in addition to (X)HTML and CSS code editing.

Then there are the free and open source tools like Komodo Edit, Aptana, KompoZer, Nvu, HTML-Kit, SeaMonkey, Selida, Amaya, CoffeeCup Free, AlleyCode HTML Editor, AceHTML Freeware, and others. They may not have all of Dreamweaver's integrated features, but if you're willing to modify your workflow a bit to incorporate other external tools, you can have a complete Web development tool chain at a price of zero. These tools rock!

Somewhere between Dreamweaver and the free tools lies Microsoft's Expression Web. The first thing to know about this application is that while it replaced FrontPage, it is not FrontPage with a new name, nor is it targeted at the audience FrontPage was. It also has a strong focus on Web standards. Microsoft has a legitimate Dreamweaver challenger this time.

It may not currently have all the niceties of Dreamweaver but it also does not cost as much. Furthermore, Microsoft Expression Web might close this feature gap in the not so distant future.

That Microsoft is bent on closing this gap can be seen in the addition of PHP support in version 2, despite PHP being a direct competitor to Microsoft's own ASP technology. I've even read that Expression Web supports Adobe's Flash better than it does Silverlight, Microsoft's own technology.

If Microsoft wants to, it seems they can play well with others too. I just hope they keep it up and don't limit their products by making them work only with their own technologies.

Choice is always good.


Friday, April 17, 2009

Why Microsoft FrontPage Didn’t Suck and Why That Matters


In a previous post, I talked about how you sometimes have to choose between ease of use and flexibility in designing applications. And as I said, the choice heavily depends on your intended audience. In the case of Microsoft FrontPage, I really think Microsoft had it right.

I agree that Microsoft FrontPage was not as feature-rich as the Dreamweaver of that time. It was not meant to be. I also very much agree that it did not produce clean, standards-compliant code. But let's set that aside for a while and look at why that was so, and whether it was a smart move.

A WYSIWYG webpage creator or visual webpage editor is a terribly complicated piece of software. Unlike a word processor, a WYSIWYG webpage creator has a lot more decisions to make. In the former, when you change the size of your font, the program only has font points to worry about; in the latter, the program has to consider whether to use percentages, points, pixels, ems, smaller (or bigger), and so on. And we haven't even gotten to the layout aspect yet.

"Let the user choose the font unit," you say? Uh, right; but who were the users of Microsoft FrontPage anyway? That it was included in Microsoft Office should give us a clue. They were the office guys and gals, the same people who use Word, Excel, PowerPoint, etc. These people are far from being Web developers, and pixels, percentages, and relative font sizes, among other things, would not make sense to them.

Targeted at these non-Web-developers, FrontPage did not need to be as feature-rich as the offerings directed at Web professionals. It did, however, have to be easier to use.

That last requirement would have been a major pain for the developers of FrontPage. Being easy to use means not bothering the (non-Web-developer) user about many aspects of webpage layout and design. That means the program has to guess in many instances. The result is generated code that is not elegant by any stretch of the imagination.

Well, that's all right; it's not for professional designers anyway. As long as the page renders correctly in the viewer's browser on their own private intranet, who cares about the code? But to ensure that this less-than-ideal code does render correctly, Microsoft had to tweak the browser. Of course, it couldn't tweak Netscape's browser or anyone else's, so it tweaked its own Internet Explorer to render FrontPage-generated pages more properly. Herein started the accusation that FrontPage-created pages only work with Internet Explorer.

Based on the above, I can't see how FrontPage sucked; I guess it didn't. It could only seem so if you erroneously compared it with Dreamweaver, which was clearly intended for a different audience and had different design parameters, even though both generate code from visual designs. If not compared to Dreamweaver and used only where intended, such as on personal sites or private intranets, FrontPage was an OK product.

You might be wondering why I ramble about the unfair treatment a long-gone product had. The reason is that while FrontPage is no more, the mistaken idea that it sucked lives on to this day and is unfairly inherited by the new Microsoft Expression Web. If you don’t believe me, just try doing a Web search. That Expression Web is way better than FrontPage ever was makes the unfair stigma doubly unfair.

Perhaps Microsoft figured that since their lightweight WYSIWYG Web page builder had been pitted against heavyweight Dreamweaver despite the impropriety of the match, they might as well set loose a proper contender. I think they’re doing a pretty good job so far.


Wednesday, April 15, 2009

The Joy of Programming


First there is the desire to create something useful. You might be doing something repetitive or tedious on your computer, like inspecting log files, repeating a calculation, or renaming a bunch of files, and you think it would be better done by the computer.

You then stop and analyze your problem logically. You fire up your text editor and list everything that is required of your program, then check that the problem has been properly addressed and nothing has been left out.

Next, you move on to the creative part: figuring out a solution. You start inside the box and take a look around; after that, you are well armed to transcend the box and do your thinking from there. While this part can be done analytically, creative solutions are best.

Now that a solution is in hand, you go on to the next stage: design, another creative task. You design the interface first, then the overall architecture of your program. The user always comes first, as they say, so it makes a lot of sense to start with how your program interacts with them. This step can also be done analytically—but we all know what has come out of that approach.

Finally, you implement the design analytically and logically. Creative implementation, or creative coding, has in the past only given us spaghetti code we cannot eat; nor is it delicious to look at, read, or maintain. You now use your language's constructs and tools and race to the finish line.
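The steps above can be sketched with the file-renaming example from the start of the post. A minimal illustration in Python (the function names and the prefix_NNN numbering scheme are my own choices):

```python
import os

def normalized_name(filename: str, prefix: str = "photo", index: int = 0) -> str:
    """Map an arbitrary filename to prefix_NNN.ext, keeping the extension."""
    _, ext = os.path.splitext(filename)
    return f"{prefix}_{index:03d}{ext.lower()}"

def rename_all(directory: str, prefix: str = "photo") -> None:
    """Apply the naming scheme to every file in a directory, in sorted order."""
    for index, name in enumerate(sorted(os.listdir(directory))):
        os.rename(os.path.join(directory, name),
                  os.path.join(directory, normalized_name(name, prefix, index)))
```

Note how the design separates the pure naming decision from the side-effecting rename loop; the tedious part the computer now does for you is just the loop.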

And there it is! You have created something useful from nothing at all. What greater joy could there be?


Monday, April 13, 2009

Ease of Use or Flexibility


Ease of use and flexibility are design goals that do not always go hand in hand. In those rare instances where they do, the programmer, of course, should provide both.

Apple managed to combine both ease of use and flexibility in their Mac OS X system. The Aqua interface gives users ease of use, while the Unix core provides the flexibility a power user might want. This is no mean feat, and it's one of the things that make the Mac a great platform.

But there are times when these design parameters do not play well together, and trying to provide both would result in a program that is neither flexible enough nor very easy to use. In these instances, the programmer has to choose between ease of use and flexibility. If you were the programmer, which would you choose?

This is where the intended users of the application come into the picture. If your intended users are professionals, or would use your application professionally, then features and flexibility should come first. If your application is for home users, then ease of use should be the top priority.

A good example of this is Picasa for home users versus Photoshop for graphics professionals. Picasa is easy to use, while Photoshop is flexible and feature-rich.

User targeting should always be considered in application development, to avoid those applications that try to be all things to all users and, as a result, become something no one wants to use.


Friday, April 10, 2009

More Effort Should be Made in Developing, Improving and Using Frameworks or Libraries


Each year, hardware gets faster and faster, but the state of software technology is not keeping up.

Sure, software system requirements are also increasing. We need more graphics capability, disk space, and memory (video and system) to run newer programs. However, this does not automatically translate into a corresponding increase in features of the same magnitude. You can always write a more bloated program if that's what you like, but optimized and efficient code is more difficult to develop.

Faster hardware and roomier primary and secondary storage allow us to add more features to our software; but to do this without introducing bugs into supposedly stable versions, we need to take a hard look at frameworks.

Frameworks take code reuse a step further. They not only let you avoid recreating the solution to a problem that was already solved elegantly, but also provide a guide to the overall architecture or structure of the application. This makes applications more robust and more maintainable.
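The architectural guidance a framework provides comes from inversion of control: unlike a plain library, the framework owns the control flow and calls back into code you register with it. A toy sketch of my own (not any real framework's API) makes the distinction concrete:

```python
class MiniFramework:
    """Toy framework illustrating inversion of control: the framework,
    not your code, decides when your handlers run."""

    def __init__(self):
        self.handlers = {}

    def route(self, name):
        """Decorator that registers a handler function under a name."""
        def register(fn):
            self.handlers[name] = fn
            return fn
        return register

    def dispatch(self, name, *args):
        """The framework's control flow: look up and invoke your code."""
        return self.handlers[name](*args)


app = MiniFramework()

@app.route("greet")
def greet(who):
    # Application code only fills in this blank; the framework
    # supplies the surrounding structure.
    return f"Hello, {who}!"
```

With a library, you write the skeleton and call the library for the hard parts; with a framework, the skeleton is already written and tested, which is exactly why applications built on one tend to be more robust.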

Fortunately, there are many general- and special-purpose frameworks available today, with Rails and the Zend Framework among the better-known examples. I'm just not sure how often these frameworks are used in modern software projects.

The open source community could also benefit from the availability of frameworks, because more complicated applications could be completed sooner. With frameworks, open source programmers could better compete with proprietary software companies that have deeper pockets.

Frameworks are an important part of a modern developer's toolkit. It would be nice if more open source frameworks were available for specific problem domains, in addition to the current general-purpose ones. This would help software development a lot.


Wednesday, April 8, 2009

The Difficulties of Web Design and Development


Some people are more creative than they are logical. The right hemisphere of their brain is more active than the left. For these people, creativity comes more naturally than logical analysis.

Tasks such as graphics design or desktop publishing are where these people shine. They perform naturally and can easily succeed in this type of environment. Analyzing computer code, though, would be like reading Greek to them.

On the other end of the spectrum are the logical types, who have a more active left hemisphere. Logical reasoning comes naturally to them, and it never ceases to amaze them how others can be so illogical.

These individuals are drawn toward mathematics and programming. They find it easy to break complex instructions into simple steps that the computer can understand. But give them a graphics tablet and a computer running Photoshop and they’ll be staring at the screen for a loooong while.

So, you have righties (right-brained, not right-handed) whom you call when you need design work, and lefties whom you call when you need some programming done. So far so good.

But whom do you call if you decide to put up your website, the rightie or the leftie? If you say the rightie, the design will certainly start out well, but the client- and server-side scripts might be a bit of a mess.

With a leftie, your site will surely function, but I'm not too sure about the typography, layout, line spacing, graphics, and color combinations.

The truth is that Web design and development is difficult. Not because it is inherently difficult, but because it requires both creative and analytical abilities, which few people possess together. Most people possess one or the other.

Designers can learn some programming and programmers can take lessons in design, but those are not their natural habitats. Man, for example, can learn to swim but can never hope to imitate the grace and speed of a dolphin. Some amphibians can move around on land, but not as long or as efficiently as land-dwellers.

If you are part of a team, this might not be a problem. The design and development tasks can be split among two or more persons. But if you are a freelancer or a one-man team, this is going to matter a lot, unless you are one of the few who have balanced brain hemispheres (yes, they exist).


Monday, April 6, 2009

Web Browser Security


How scary is this: “IE8, Safari and Firefox All Fall in Hacking Test”?

That means that if you're using any of those, your security can be compromised. "But that's most of the browsers out there," you say. True. The message, actually, is: nobody is safe.

No system is really 100% secure or unbreakable. Some just make it harder, not impossible, for intruders to break in. As they say, once you're plugged in, you're a potential target. And once you fall for a social engineering ploy, like giving out info or running a downloaded script or program, you're owned.

But on the brighter side, not all system intruders are as talented as “Nils”, the winner of the CanSecWest Pwn2Own hacking contest where the above browsers were cracked. Nor do most have all the time in the world.

So, by taking obvious precautions, like updating your anti-malware regularly, turning on your firewall, and using network address translation if you have it, you'll make breaking into your system difficult enough for ordinary intruders and script kiddies that they might just give up and look for easier targets.

Just remember to be careful with what you download. Some software, especially cracked and pirated copies, may contain malicious code. Running these would expose your system to anything from simple (but annoying) spam to dangerous exploits and everything in between.


Friday, April 3, 2009

The Conficker Worm—Brilliance, Misplaced


The creators of the Conficker worm are clearly smart and clever. The worm has not only infected millions of computers, but is also spreading at an alarming rate. And while nothing spectacular happened on the first of April, when it was scheduled to show force, that does not mean it's the end of it.

Lately, the Conficker worm got an upgrade. It now tries to connect to 50,000 Internet addresses instead of just 250, as it did in the past. It also has a new peer-to-peer capability for updating itself, among other purposes. Doing all this takes a lot of time, energy, and intelligence on the part of the worm's creators.
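The 250-versus-50,000 figure refers to the worm's rendezvous scheme: as security researchers have described it, candidate domain names are derived pseudo-randomly from the current date, so infected machines and their controllers can compute the same list independently without it ever being transmitted. A deliberately toy sketch of that general technique (the hash, the count, and the .example suffix are my own illustration, not Conficker's actual algorithm):

```python
import hashlib
from datetime import date

def candidate_domains(day: date, count: int = 250) -> list:
    """Derive the day's pseudo-random rendezvous domains from the date.
    Purely illustrative of a domain generation algorithm."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode("utf-8")
        name = hashlib.sha1(seed).hexdigest()[:10]
        domains.append(name + ".example")
    return domains
```

Because the list is deterministic, defenders who reverse-engineer the generator can pre-register or block the same domains, which is why enlarging the pool from 250 to 50,000 per day made the worm so much harder to contain.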

But such resources could have been used in better ways. There are still a lot of improvements and innovations to be made in IT. So why focus on destruction when you can apply the same talent to help fuel progress? Which do you think is better: spending a year developing a worm, or spending the same time and energy developing something useful and fun, like Mr. Nasser's GeShout?

I think the answer depends on how you want to be remembered. This is especially true in this day and age, when Internet archives transcend both time and space. News articles and posts on blogs, forums, newsgroups, and so on will be available for a long time to come. They are also searchable and can be accessed from anywhere in the world.

From what I remember of my childhood days, everyone wanted to be the hero and no one wanted to be the villain. When we role-played, all of us wanted to be Batman or Superman and the like. All of us ran around pretending to be the caped crusader. I just wonder at what point in a child's development toward adulthood it became OK and acceptable—and sometimes even fashionable—to be an Emperor Palpatine and go over to the Dark Side.


Wednesday, April 1, 2009

Internet Explorer is Back with a Vengeance


The new incarnation of Internet Explorer, IE 8, has just been released, and a copy has found a place in my system. This new version is said to be faster than the previous one and, according to Microsoft, the competition. I have yet to test the veracity of this claim, but IE 8 does have a lot going for it.

As someone interested in Web development, my primary concern is standards compliance. As I mentioned in a previous post, IE 6's non-compliance discourages a lot of developers from taking advantage of newer coding techniques and new XHTML and CSS features.

These developers fear that viewers using IE 6 might not be able to access their site if it uses a lot of new features which are beyond the old browser’s comprehension. This hinders the Web's overall progress. Had IE6 been compliant or had it not existed, developers would be free to push the limits of the Web to the benefit of all users.

According to Microsoft's John Curran in his interview with TechRadar, that concern was addressed in IE 8. Based on what the guy said, it would seem that the IE team is really trying to make IE adhere to Web standards.

I don't doubt his words considering that Microsoft's Expression Web, a component of MS Expression Studio, really is more oriented toward standards compliance compared to FrontPage, the product it replaced. In fact, MS Expression Web is a serious contender in the visual page authoring category currently dominated by Adobe Dreamweaver.

The reason IE 8 cannot be as compliant as Firefox or Opera is that the IE team has a commitment to support enterprise legacy users. Understandably, they cannot just leave those users hanging. Firefox and Opera have no such worries; they are free to innovate. For this reason, I will be staying with Firefox. But the IE guys really did the best they could, given the restrictions they find themselves under. For that, they deserve some credit.


Monday, March 30, 2009

New Technology Meets Old Technology


An old technology for catching fish can now be observed with the aid of Google Earth. This fish trap, believed to be 1,000 years old, is a giant array of stones formed in the shape of a V.

Fish would get caught in the structure as the tide flowed out. But due to its massive weight, it has been sinking into the sand, and in its current state the trap is not as effective anymore.

Since it is now submerged deeper than when it was built, and since it is a big structure, you cannot easily observe it from the water. But with an aerial view like the one Google Earth provides, the outline can be seen clearly.

Perhaps there are other structures that look like natural formations when seen on land but reveal themselves when viewed from Google Earth. With this tool, a lot of people can try amateur archaeology from the comfort of their homes.

Sadly, Google Earth is not only used to make beneficial discoveries like this, or to search for evidence of the lost city of Atlantis. It is also used for mischief, such as scouting for lead roof tiles to steal.


Friday, March 27, 2009

Substituted Service of Court Process via Facebook


It seems that service of court process via Facebook has now been accepted as an alternative mode of service in New Zealand. This illustrates that case law is dynamic and alive, and is in touch with the current trends in society.

I was pleasantly surprised by this development. Pleasantly, because that's the way case law should be—evolving and relevant. Surprised, because the legal profession is not the most enthusiastic when it comes to embracing new technology.

There are fundamental reasons for this hesitation to embrace new technology. The legal profession is traditional in a lot of respects. For instance, the language commonly used in court and in pleadings and contracts contains a lot of archaic phrases and terminology. Usage is also distinctly archaic, though there is talk of changing this.

This traditional image can also be perceived in everything from the layout of a courtroom to the pillars-and-scales logo of almost anything connected to the profession. Almost everything connected to the legal profession shouts adherence to tradition (which might include the traditional typewriters :) ).

In addition, unlike most other professions, the legal profession is essentially backward-looking. Medical and engineering practitioners are forward-looking in the sense that they focus on what could be done instead of what has been done.

Doctors, for example, would try to discover new ways of combating illness and engineers would try to discover new ways of producing stronger polymers and alloys. Legal professionals, on the other hand, whether members of the bench or of the bar, routinely look back at what has already been decided keeping the doctrine of stare decisis in mind.

But times change and the legal profession has to keep up. Moreover, people should not be allowed to circumvent the law and escape legal procedure when modern technology is available to prevent the same. This new interlocutory order is a step in the right direction.

Nobody should be allowed to willfully disregard legal procedure by actively evading conventional service then cry foul when substituted service is made.

Anyway, if someone does not want to be served via Facebook, they always have the option of updating their current address in the court records so that court process can be served the conventional way. Failure to do so smacks of bad faith. No court would allow service of process through the Internet, or even substituted service, when ordinary service would suffice.


Wednesday, March 25, 2009

The Star Wars Anti-Missile Program Might Be Fighting a Different War


The smart guys and gals behind the 1980s missile shield are attempting to build a Weapon of Mosquito Destruction, in the hope of winning a war that has been fought for quite some time now.

This war against malaria has silently claimed more lives than any other war. It does not respect POWs, it does not follow any rules of engagement, and it definitely does not spare non-combatants, including women and children. This is the war we should be fighting: a war waged against all mankind, not among ourselves.

Of course, technology is at the heart of this war. The new weapon is supposed to work by locking onto the sound generated by a mosquito's wings and shooting the insect with a laser beam, à la Death Star. Needless to say, a computer handles detecting the sound source, aiming, and firing the laser.

I am just wondering how the technology would be implemented. Would it be packaged as an appliance which would sit in your living room and buzz the heck out of a squadron of flying mosquitoes, or would it be placed on board a satellite in low earth orbit ready to decimate mosquitoes to extinction?

Interestingly, Mr. Gates is funding the project. That means there is a big chance that Microsoft will be the company tasked to provide the embedded software or operating system of this device.

One final question: will there be a remote-control reset button to go with the device? You can't simply walk up and press its reset button if it hangs and decides to shoot in every direction, you know. With a remote, at least you can reboot it from behind cover. :)


Monday, March 23, 2009

The Operating System Wars


There are endless exchanges around the internet about which is better—and a lot of it is either of the Windows versus Linux or the Windows versus Mac variety. Reading through all of those arguments could keep anyone busy for quite some time.

The answer is actually quite simple, and I think everyone involved already knows it: it all depends on the task you are trying to accomplish. Why the question still gets asked despite the universality of this simple answer is a question to which I have no simple answer.

You must determine your task before you decide on a tool to use; otherwise, you might end up using an industrial chainsaw to cut your grass instead of simply using a lawnmower.

The failure to define the task first is one of the main reasons for a lot of those disagreements. A participant insisting that Windows is better and another insisting that Linux is the best could both be correct: the former could be referring to Windows' prowess for developing .NET applications, while the latter could be referring to the openness and flexibility of Linux.

Once the task has been defined, selection becomes easier. Say you want to edit html code and text with BBEdit or TextMate, then use a Mac. If Bluefish or Quanta Plus is your thing, Linux is the way to go. Finally, if you’re all for UltraEdit, TextPad or EditPlus, Windows is where it’s at.

Well, you could say that it’s not that simple all the time and you’d be right. However, your options will at least be clearer and your arguments, saner once the elusive task has been properly defined. But then again, where’s the fun in that?

Friday, March 20, 2009

Twitter is One Hot Social Networking App


This is not news. Twitter really is one of the hottest apps on the Internet today, and a lot of people I know are happily tweeting away.

But the 140-character limit is a real constraint for me. It’s surely not a bad thing, especially judging by the number of its users. It’s just that I personally have difficulty expressing my ideas in a small number of words.

While I’m learning to write shorter posts lately, I’m not yet very good at it. If you browse my archives, my older posts seem to be a little bit on the longish side. I always need a word count tool to remind me when the time comes to ease up on the keyboard and prune some words.
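The counting itself is easy to automate. Here is a minimal Python sketch (my own illustration, not any particular tool mentioned here) that checks a draft against Twitter’s 140-character cap and reports how much pruning is still needed:

```python
def fits_tweet(text, limit=140):
    """Check a draft against Twitter's 140-character cap.

    Returns (fits, overflow), where overflow is how many
    characters must be pruned before the draft will post.
    """
    overflow = max(0, len(text) - limit)
    return overflow == 0, overflow

draft = "I personally have difficulty expressing my ideas in a small number of words."
print(fits_tweet(draft))  # this short one fits: (True, 0)
```

The hard part, of course, is deciding *which* characters to prune—that, no script can do for you.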

It really is true, at least for me, that a shorter post does not necessarily take a shorter time than a longer one. My posts normally start out long, and I have to take additional time to make them shorter. But that’s only if I am not in a hurry.

The French mathematician and theologian Blaise Pascal had it right when he said: “I would have written a shorter letter, but I did not have the time.” This thought is echoed in various forms by other notable men such as H.D. Thoreau, Voltaire, Augustine and Mark Twain (you can read about it here). I guess these guys would not have been fans of Twitter either, had it existed in their time.

Maybe, once I learn the art of keeping it short, I could follow y’all in Twitterland.

Wednesday, March 18, 2009

Megapixel Count Is Not an Accurate Determinant of Digital Camera Quality


In most engineering and scientific applications, quantitative analysis almost always takes precedence over qualitative analysis. The latter plays only a supporting role, if any, in decision making.

Part of the reason is that qualitative analyses or tests are usually subjective and vary from person to person; hence, they do not offer a very solid foundation on which to base decisions. It is better to base decisions on something of which everyone has a common understanding.

If you are a chemical engineer tasked to procure a pump for, say, corrosive liquid transfer, you’ll find it easier to defend your choice if you have the metrics. Suggesting that pump A will increase your production capacity “because it has an output of 50 gallons per minute more than the other option” is better than “because it generates a louder sound hence, it must be pumping out more”.

Had you used the latter argument, another colleague of yours might say: “Yes, the sound pump A generates seems louder, but that of pump B has a higher pitch, so pump B must be pumping out more.” Another might say that she does not hear any difference. Now you have a problem.

In digital cameras, however, it would seem like we are better served if we take a different approach and give qualitative analysis more weight than quantitative analysis.

The metric most often used as a proxy for the picture quality of digital cameras is the megapixel count. But it is not an accurate one: more megapixels do not mean better picture quality.

Picture quality is more dependent on sensor size, sensor sensitivity and lens quality than on mere pixel count. But sadly, these qualities do not have convenient metrics which could be used for comparison; that’s why most people still use megapixels in comparing cameras just like they would use horsepower in comparing cars.

Marketing takes advantage of this and more models are released with more megapixels as the main attraction. Other companies are also forced to release models with more megapixels to keep up with the competition.

But there is a downside. A point is reached where adding more pixels decreases picture quality instead of increasing it, because each individual pixel sensor will be smaller, less sensitive, and generally of lower quality than larger ones.
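The arithmetic behind that downside is easy to check. The Python sketch below is my own back-of-the-envelope estimate, assuming a typical compact-camera sensor of roughly 5.8 x 4.3 mm (the numbers are illustrative, not from any specific camera). It shows how the pixel pitch shrinks as more megapixels are packed onto the same sensor:

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometres, assuming square
    pixels spread evenly over the full sensor area."""
    area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Assumed 1/2.5-inch-class compact sensor, roughly 5.8 x 4.3 mm.
for mp in (5, 8, 12):
    print(f"{mp:2d} MP -> {pixel_pitch_um(5.8, 4.3, mp):.2f} um per pixel")
```

On that assumed sensor, the pitch drops from about 2.2 um at 5 MP to about 1.4 um at 12 MP—each photosite catches less than half the light, which is exactly the sensitivity loss described above.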

In effect, consumers suffer from this pixel-count war. It has to stop. But before that can happen, consumers should know that pixel count is not the be-all and end-all of picture quality. Then, they must be willing to trade pixel count for lens quality and other features that have a more direct bearing on picture quality.

When the marketers realize this, they will hopefully change their focus to more substantial features. As it is today, competition among camera makers is harming consumers instead of benefiting them.

Monday, March 16, 2009

Lists of Text Editors


I am very fond of text editors. It could be because the way I use a computer mostly revolves around creating plain text files or editing text for transfer to, or further processing with, other applications. Aside from Firefox, a text editor is the one application I really need to have on my computer.

Since there are many different ways of working with text, no one editor can be all things to all people. Because of this, and the fact that text editors are relatively easy to make, there is an abundance of text editors on the Internet. Most are free or at least have a trial version.

Due to this abundance and my need to work with text, collecting and playing with text editors has become a hobby of mine. I would download new text editors, or newer versions of those I already have, and use them for a while. And like a game, I would rank them in my system and bump others up or down. Some get uninstalled.

My current favorites are TextPad and EditPlus. These are great shareware programs that deserve a lot of attention. PSPad and Notepad++ are not far down the list, and these are free.

But the growing number of text editors makes it hard to keep track of them all, much less be informed of worthy newcomers. Luckily, I found these lists:

So, for those of you who, like me, are fond of these useful tools, that should keep you busy.

Friday, March 13, 2009

Don't Mess With Dreamweaver


I found a controversial blog post in PC Pro. In that post, Tom Arah made a bold statement that Dreamweaver is dying. That made me say: "Dreamweaver? Dying?".

He supported his claim by saying that Dreamweaver is for static sites, and static is out while dynamic is in; and that it is flawed at posting content (which happens to be king) and at making it easy to search. He ended with the following: "Dreamweaver is dying. Long live Drupal."

Wow! That was heavy. After reading the post and browsing through the comments (there are a lot of them), I learned something: Don't mess with Dreamweaver. The exchange is also entertaining, in a different sort of way.

I am not saying that he is wrong or that he is right. It's just that there are better ways he could have said what he wanted to say, had he thought about it a little more carefully. I myself am guilty of that sometimes. (That's why I said I learned a lot.)

Bashing Dreamweaver is a little like bashing Emacs or Vim. It is difficult to expect the [Emacs|Vim] user to respond rationally to such an attack, especially when you consider that they truly (and rightfully) believe their tool is very useful and that they have spent a considerable amount of time learning to use it effectively. The same applies to Dreamweaver, except that its users have not only spent a considerable amount of time learning the thing but also a substantial amount of cash on the license. Once you understand that, you will see the comment area of that post in a different light.

My own thought on the matter is that content management systems like Joomla and Drupal can coexist with webpage editors like Dreamweaver and Expression Web, and with HTML editors like TextPad and EditPlus; and, no, you can't compare a CMS with Dreamweaver, just as you can't compare apples with oranges. The roles Dreamweaver plays may change with the introduction of content management systems, but a CMS certainly does not render Dreamweaver irrelevant or obsolete.

Finally, we should make full use of all the tools available to us instead of bashing those we do not use (especially if those tools have an army of loyal users). Instead of putting things down, let's look for new or productive ways to use them. Tools like Dreamweaver, Expression Web, Joomla, Drupal, TextPad and EditPlus help make our Web development tasks easier. As such, they are our friends. We do not put down friends—we use them. (kidding) :)

Wednesday, March 11, 2009

Should Apple Continue Building New Generations of Mac Mini?


Every PC company like IBM, HP, Acer and Dell has an entry-level system. These systems are normally positioned in the lowest rung of the ladder in terms of cost and, as a result, features. The idea is to build a low cost system in order to lower the consumer’s barrier of entry to a company’s product line.

Like the others, Apple also needs to have, and in fact has, an entry-level system. Currently, it is the Mac Mini. However, I would venture to say that in the case of Apple, the Mac Mini is an unsuitable choice for their entry-level system.

Sure, the Mac Mini is the cheapest system Apple can make which would allow new users to see for themselves what the Apple camp looks like and which would hopefully win them over. On the surface, that seems reasonable enough but looking closely, it is flawed.

The Mac Mini does not really let users experience what it feels like to own a Mac, which is more than simply running Mac OS X. A big part of it is the pride of owning something beautiful sitting atop your desk and feeling the elegantly designed keyboard and mouse.

Picture this: I have a PC system. I interface with it using a generic keyboard, a generic mouse and a boring LCD. The ugly gray box of a CPU case is hidden somewhere under my desk or my computer table.

I hear all about the elegance and ease of use of the Mac and decide to try it. I go for the entry-level Mac Mini, of course, as I’m not sure if I’m going to stick with a totally different system which runs a totally different set of software. I then integrate it into my current setup by replacing my old CPU box. I would probably put it in the same place my old gray box had been, or maybe behind the LCD so as not to clutter my workspace.

Sure enough, when I boot it up, I am greeted with OS X instead of Windows. Then, I start to use it with the same mushy keyboard and stare at the same old boring monitor. By the way, the mouse still skips. I still interface with the same three out of four components.

The difference is just one component—and a small one at that. Moreover, the replaced component had not even been very visible to me anyway, since it had been hiding under my desk. So practically nothing really changed aside from the new OS and the fact that I cannot run my old programs. But a well-designed OS, such as OS X, is not supposed to get in the way between you and your applications; so I don’t notice the OS much either, except during boot-up.

Now answer this: Did the Mac Mini give me a reasonably Mac-like experience, as it should have?

You could say that I could buy Apple components to go with my Mac Mini, but since Apple does not make entry-level monitors, I would end up paying just slightly less than for the next system up the line. Not very good for an entry-level machine.

Apple themselves have been emphasizing that they are a hardware company, not a software company, and their choice of entry-level system should reflect that. The current one does not. The statement it makes goes something like: “Our new product is Mac OS X Leopard. You can try it on your current setup if you just buy this neat add-on to your system—the Mac Mini.” It would seem like the Mac Mini is just there so that you can try Mac OS X. Definitely not the message they want to impart.

If that is all their entry-level system can do, how can it compete with the currently popular practice of installing a legal copy of OS X on a regular PC? (Yes, it’s doable, though somewhat within a legally gray area; and the procedure has been floating around the Net for a while now.)

I believe they got it right in the past when their entry-level system was the eMac—and the iMac before it (not the current iMac). Those systems are full-on Apple systems that give you a more complete Apple experience at a low price.

Apple should discontinue further production of the Mac Mini and should instead build a system similar in concept to the eMac and the iMac before it.

Monday, March 9, 2009

Linux on Your Desktop? Why not?


When most people use their Windows computers, it is usually for only a small set of applications. Not all computer users treat their computers as a laboratory of sorts where a lot of programs reside for testing and experimentation. For most of them, the computer is just a means to an end, a tool to get some job done—like getting a date on Facebook.

At the top of this list, most probably, is a Web browser for browsing the Internet and updating their Facebook profile. Another application might be a mail client for checking mail on POP and IMAP servers, though others might just use their browsers, especially if they happen to use web mail. Some would also want an instant-messaging client running in the background just to make sure they don't miss anything. Finally, there are the word processor and spreadsheet for the more mundane tasks (read: work).

If this type of user describes you, then I have some great news for you. You can do all of the above (and more) with Linux! In fact, even if you work with video or animation, are a hardcore gamer, or a graphic artist, Linux can still work for you. There are some animations nowadays that are being created using Linux boxes, and the Wine project makes games like Counter-Strike and others available on Linux. There are also alternatives for those who do graphic designs and desktop publishing.

But let's go back to the casual user, which comprises most of those using a computer. For all of you, Linux is a viable alternative. The only question now would be: "If I can do the things I usually do in both Windows and Linux, why would I switch?".

Well, first of all, everyone wants to be free, and in the software world, Linux is freedom. There is no vendor lock-in with Linux. You are free to use whatever technology you want. In the Windows world, some things just go well together—especially if they belong to the same company or a deal between companies is in place. Microsoft products, for instance, work well among themselves but do not play nice with other technologies.

Forced upgrades are another problem you do not have to deal with in Linux. When Microsoft decides to stop supporting XP, what would you do? Shell out more money for the latest Windows version, of course. In Linux, you can upgrade the whole thing, only the applications you use most, only the kernel, only security patches, or anything in between. You are completely in control.

Software quality and security are also strong points of the open source world. With all those developers having access to the source code, all bugs are shallow. Patches and updates are also issued at a faster rate than for commercial software, helping ensure that you are running the latest and most secure version.

Finally, there is the cost. Most, if not all, open source and free (as in freedom) software is also free (as in cost: $0). You can't beat that, especially in these very challenging times.

These are only a few of the reasons why making the switch to Linux is such a good idea. You can use Google to find a whole lot more. One thing is sure; as a normal computer user, Linux can take you to places you never thought existed. And by the way, while I was finishing this post, my Digg toolbar notified me of a new article in Digg. It is titled: "25 Reasons to Convert to Linux". How timely; I might as well take you there.

Friday, March 6, 2009

Does a Graphical User Interface Really Make Applications Easier to Use?


A graphical user interface, or GUI, is almost synonymous with “user interface” these days. When somebody talks about a user interface, it is almost always presumed to be graphical.

Believe it or not, there was such a thing as a command-line or text-based interface before the advent of GUIs. That was the type of interface I was accustomed to while still in college. And believe it or not, some people do find them easier to use than graphical interfaces. This is especially true once you know the application’s commands and options.

I even remember the first time I had my own PC—it was a 486 IBM PS/ValuePoint. Windows 3.1 was hot at that time and came pre-installed on that PC. Since I was used to the empty black screen and blinking cursor of DOS at school, the array of graphical elements—icons, menus, windows, etc., in different colors—made me feel somewhat dizzy.

The first time I was greeted by the graphical Program Manager, I did not know what to do. Had it been DOS, I would have typed turbo or tp when I wanted to run the Turbo Pascal 7 IDE, or typed clipper when I wanted to compile some Clipper source code. But with the Program Manager, I did not know what to type or where to type it—there was no blinking cursor marking where input should be entered. Then I learned that I had to use the thing beside the keyboard called a mouse.

Now, I am quite comfortable using the mouse for some pointing and clicking, and dragging and dropping. Maybe a little too comfortable, because I am beginning to forget the esoteric things I used to do with the command line. But I still miss the speed and efficiency of using the keyboard instead of the mouse for some tasks.

If you are new to an application, I would heartily agree that a GUI lets you make use of its basic functionality more quickly. Once you see a scissors icon on a toolbar button, for instance, you immediately know it has something to do with cutting a previously selected object and storing it on the clipboard. Had that new application been text-oriented, you might not know how to start or what command to use until you read the manual—you might not even know which command-line options and parameters to throw; but that’s what the /? and -h switches are for, right?
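The -h convention is so entrenched that modern toolkits generate the help text for you. As a small illustration (the tool and its options are made up for this example), Python's argparse module gives every program a working -h/--help switch for free:

```python
import argparse

# Hypothetical command-line "cut lines" tool; the names are
# illustrative only, not a real utility.
parser = argparse.ArgumentParser(
    description="Cut a range of lines from a file to the clipboard.")
parser.add_argument("file", help="file to edit")
parser.add_argument("-r", "--range", default="1-1",
                    help="line range to cut, e.g. 3-7")

# Invoking the script with -h would print the usage text built
# from the declarations above; here we parse a sample invocation.
args = parser.parse_args(["notes.txt", "--range", "3-7"])
print(args.file, args.range)  # notes.txt 3-7
```

In other words, a text-oriented tool can be nearly as discoverable as a toolbar full of icons, as long as its author declares the options instead of hiding them.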

But once you are already familiar with the application and its features, sometimes the graphical interface gets in the way. I don’t know if you also feel that sometimes. If you do a Ctrl+C and a Ctrl+V instead of going to the menu bar, clicking edit, bringing the mouse down slightly, clicking copy, moving the mouse to the insertion point, going back to the menu bar to click edit again, dragging the mouse down again and finally clicking paste, you know what I mean.

GUIs are also helpful for applications you do not use frequently, because you tend to forget most of their functions. It’s nice to click a menu option and see a list of what you can do.

This could be the reason why editors like Vim and Emacs are still quite popular in this day of GUIs, even though they rely more on user commands than on point-and-click functionality. Users of these editors use them extensively (partly because you can use them for practically anything—text processing, source code editing, mail, HTML editing, etc.) and hence become so familiar with their commands that typing the commands directly is much quicker and more efficient for them, and a GUI would only get in their way.

Wednesday, March 4, 2009

What Makes the Mac Great


The Mac has long been known for its elegant and easy-to-use interface. For that reason, creative types like layout designers, writers, desktop publishers, etc., are drawn to it.

Ease of use, however, has a downside. Some of it is achieved by reducing the number of choices or options a user can make. The idea: fewer options, less confusion.

This works fine for those with a definite set of feature requirements, because once that feature set is provided by the system or its applications, the user is satisfied.

But there are users who need, say, feature A for examining network connections, a slight variation of the same feature A for checking FTP logs, and so on. The developer cannot predict the different variations this type of user needs for feature A. As a result, either the developer tries to implement all conceivable variations of feature A, which results in bloated software, or the developer implements only a small set of the variations. In both cases, the end result is sub-optimal. This type of user needs flexibility more than ease of use.

For these users, a flexible system is necessary. While this need is satisfied by Linux, FreeBSD and other Unix-like systems, some people cannot stay with these systems alone for long. Even though they need the flexibility these systems offer, they do not need that degree of flexibility all the time. Sometimes, they also do mundane tasks with definite requirements which, while doable, are unnecessarily difficult on these systems.

There are two possible solutions for this: the first one is to buy a Mac, and also a PC with Linux or FreeBSD; the second is to configure a dual-boot PC which allows the user to choose either Linux or a more familiar MS Windows operating system.

Obviously, the first option is only for a fortunate few. Most would go with the second. At boot time, the user chooses either the easy-to-use system or the flexible one. Seems good on the surface—until you decide, in the middle of your job, that you need to work in the other operating system. That requires a reboot and choosing the other system from the boot menu. This is okay if you only have to do it once or twice a day; but if it is more frequent than that and your boot time is not fast enough, it starts to become a hassle.

But with the introduction of Mac OS X, these problems vanish into thin air. Like the classic Mac OS before it, it has the elegant and easy-to-use interface Mac users have come to know and expect. At its core, however, is Unix. It is as easy to use and as elegant as its predecessors but has the flexibility only Unix (or Unix-like systems such as Linux and FreeBSD) can offer. In short, it is the best of both worlds.

So, if your PC is currently configured to dual-boot Windows and Linux or FreeBSD, the Mac could be the best platform for you. Your friends will gawk at that beautiful hardware too.

Monday, March 2, 2009

Netbooks: The Best Tool for Writers and Bloggers


Netbooks may not be the most powerful machines out there. The storage space that comes with them is small and their processing power leaves much to be desired. But they do have a lot to offer—especially for writers and bloggers. The price of these gadgets is also very attractive.

If you really think about it, most gadgets build on their predecessors. New laptops, for instance, sport more memory and storage, faster processors, brighter LCDs, etc., than previous models. Netbooks, on the other hand, seem to go the opposite way. The last time I checked, most have half the storage capacity of regular laptops.

Fortunately, writers and bloggers do not really need the fastest processors and the roomiest hard disks. The slowest processor of even ten years ago is way faster than your average typing speed, especially if you factor in the time you spend thinking about what to write. Fast computers simply waste processor cycles waiting for your next keystroke.

Hard disk capacity is also a non-issue for most of us—even if what you are writing is a thousand-page book. Plain text, which is what most writers work with most of the time, does not take up much space—even if you include the DocBook or LaTeX markup (or whatever markup you use).
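A back-of-the-envelope calculation makes the point. The numbers below are my own rough assumptions (about 500 words per page and about 6 bytes per word including spaces and punctuation), not figures from any study:

```python
# Rough size estimate of a 1,000-page plain-text manuscript.
# Assumed: ~500 words per page, ~6 bytes per word with spaces.
pages = 1000
words_per_page = 500
bytes_per_word = 6

total_bytes = pages * words_per_page * bytes_per_word
print(total_bytes, "bytes, or about %.1f MB" % (total_bytes / 1024.0 / 1024.0))
```

That works out to roughly 3 MB for the whole book—a rounding error on even the most modest netbook drive.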

The software we use is also not hardware-intensive. Most of the time, we only need a browser, a text editor, an email client and a blog client. These can be installed on a hard drive with modest space, and they don’t require much memory to run.

You can even get by with just a browser if that is what you want. You can post entries to your blog with its web interface. If that does not appeal to you, there are a lot of online text editors which you can use to draft your post or your manuscript like EditPad, FCKeditor, and Google Docs. The latter two can be used for more than text editing. (Personally, I’d rather have my stand-alone text editor.) Most email services could also be accessed online without a dedicated desktop client. With these applications, and references like an online thesaurus and dictionary, you only need an internet connection and your netbook. It truly is a Net book.

By giving up features you’re not going to need much anyway, netbooks are able to offer something you do need—a small size at the right price. In these financially troubled times, the sticker price is not something to be ignored, and that of the netbook is refreshingly affordable.

Believe it or not, the size of a netbook can enhance your creativity. You can bring it with you to the park, the garden, the beach or anywhere your creative juices flow freely. Changing your working environment can sometimes get you through blocks, and working in the same corner all the time is not the most conducive way to write. A netbook is easier and more convenient to carry than a full-size laptop.

So the next time you write a novel set in Italy, you might find yourself there, sitting in a sidewalk cafe, netbook open.

Friday, February 27, 2009

The Decline of Handwriting Skills: A Tragedy, or a Sign of Progress?


I just found an article about the slow death of handwriting while surfing the web. It really caught my attention because technology, while not mentioned or even hinted at in that article, would seem to be one of the major causes of handwriting’s declining relevance today—and I am fond of both technology and writing, of which handwriting is a natural part, whether by history or convention. In fact, I recursively use technology to write and publish stuff about technology and publishing (blogging).

Learning the art of handwriting is a demanding undertaking. It takes a lot of patience and practice to become proficient at it. Technology, on the other hand, promises desktop publishing even to hunt-and-peck typists.

In less than half a day, the average adult can be taught how to turn on the computer and fire up notepad. After that, she can type (even with both index fingers only) and print a simple letter to her boss. But if she does it by hand and her handwriting is indistinguishable from hieroglyphics, it’s going to take a lot longer than half of a day if she does not want her boss to take the letter as a death threat from a mummy.

Since learning the art of handwriting is tedious and rendered unnecessary by the ease with which technology can duplicate its results, there is less motivation to undergo the torture—especially if it involves one’s hand being slapped with a long ruler. In addition, time spent learning the intricacies of holding the pen at just the right angle is time lost for learning more employable and relevant skills—like typing and how to use OpenOffice.org. So, most schools would rather focus on the latter.

Clearly, technology is one of the many culprits—if not the mastermind—behind handwriting’s fall from grace. The question is: Is this a good thing, or not? Also, should technology be punished for the fate of handwriting, or applauded for the productivity it offers in exchange? Finally, should handwriting still occupy the same status in our schools as it did in the past, considering the amount of new material to cover and the limited time? This is for each individual to consider. For me, the answer is clear.

Personally, I really like to write. Not just the writing one does when creating content but the physical act of writing itself, regardless of the product or result. There is so much to be said about the feeling you get when a good pen slides and glides on a piece of paper. It's like some kind of moving meditation by which you are disconnected from the whole world for a moment and you, the pen and the paper become one. This I do all the time when the need to relax or to clear my mind arises. I write random words, small loops, big loops, diagonals, and curls. What I write does not matter; only the act of writing does. It is a wonderful exercise that helps me feel better, improve my handwriting and reuse some paper which should have been due for the recycle bin.

There is another benefit of writing by hand. With it, you can draft the most personal message you can send somebody. Sending a handwritten message sends a message of its own—that of sincerity and importance.

The significance of a handwritten document is also not lost on the legal world. In some jurisdictions, a handwritten document takes precedence over typewritten or printed ones. In testamentary succession, some jurisdictions do not require a will to be notarized if it is in the handwriting of the testator (a holographic will). This says a lot about the weight given to handwritten documents and instruments.

However, while the importance and benefits of writing by hand are certainly acknowledged, the increased productivity you gain with the aid of technology cannot simply be ignored. There are just a lot of things you can do better with a keyboard and a text editor than with pen and paper.

When time is of the essence, a fast writer will find it very hard to match the speed of an average typist hammering on a Dvorak-layout keyboard. Finding and correcting mistakes in a word processor or text editor is also a little less messy than with pen and paper. In this Internet age, it is easier to integrate research and writing if you do it in a text editor. And when you add spell checking, search and replace, copy and paste, etc., it becomes no contest.

Another thing, and I think this is the real reason for the impending demise of handwriting as people knew it before the 1960s, is that you need an inordinate amount of time to learn to write something as presentable as a computer-generated document—time better spent learning to touch-type. For this reason, most people will not bother to spend more time on their handwriting beyond what is required to be reasonably legible. They can be more productive in a much shorter amount of time with a keyboard. Sure, people will still write by hand and it will still be taught in schools—but not in the same manner or with the same emphasis as in the past.

I already confessed that I love to write by hand. But since it is not practical for the tasks I commonly face in this day and age, I became a keyboard warrior, practiced mouse-fu for my daily tasks, and wielded the pen only for leisure, the occasional short note and the oft-cited example: to-do and grocery lists. Luckily (or perhaps unluckily) for me, I have already developed enough handwriting skill to use it without requiring the reader to hold an advanced degree in cryptography. But for those still in school, I am not surprised, and do not blame them, if they de-emphasize handwriting in favor of typing skills. After all, nobody ever shed a tear when the slide rule was replaced by the calculator, or when travelling on foot was replaced by speedy and convenient transportation, or when the telegram was replaced by text messaging and then by Twitter, or when...


Wednesday, February 25, 2009

Does Writing and Programming Mix?


Since I sometimes write and also do some programming, I began to wonder if there really are any significant differences between the two activities.

On the surface, the two would seem very different. Programming is usually associated with logic and hence with the left hemisphere of the brain. Writing, on the other hand, is normally viewed as a creative endeavor that taps into the right hemisphere, just like other creative pursuits.

Studies suggest that creative people have a dominant right hemisphere and logical people a dominant left. If one side is more dominant than the other in each type, does that mean writers can only write and not program well, and programmers can only program and not write well?

I think both are really just different sides of the same coin, just as Slackware and Debian are different distributions built from the same Linux kernel and GNU components.

Writing is not purely creative. You need a dash of logic to put structure in what you write. Programming is not pure logic either. Most of the brilliant hackers (think Dennis Ritchie, Ken Thompson, Richard Stallman, Linus Torvalds, Bill Gates, Steve Jobs, Eric Raymond, etc., not network intruders or system crackers) are known to be very creative people. And in programming, sometimes only creativity can get you out of a situation that logic and reason deem hopeless.

Both even share the same elements. Communication is the raison d'être of each. In writing, you are communicating with another person, your reader. In programming, you communicate with the machine. Both also use language to communicate: English or Spanish for writing, Lisp or Smalltalk for programming. And just like any other language, a programming language has a grammar; it is just called syntax. Programming languages even have their own elements of style.
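To stretch the analogy a little, here is a minimal sketch (in Python, chosen arbitrarily) of how a language's interpreter enforces its "grammar" the way an editor would flag a malformed sentence. The helper name `is_grammatical` is mine, just for illustration:

```python
def is_grammatical(source):
    """Return True if the source text obeys the language's syntax rules."""
    try:
        # compile() parses the text without running it, much like
        # proofreading a sentence without acting on its meaning.
        compile(source, "<example>", "exec")
        return True
    except SyntaxError:
        return False

# A well-formed "sentence" passes; one with an unterminated
# string literal is rejected as a grammar error.
print(is_grammatical("greeting = 'hello, world'"))   # True
print(is_grammatical("greeting = 'hello, world"))    # False
```

The parallel is not exact, of course: a human reader will forgive a dangling quote, while the compiler will not.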

I like to think of programming as “writing” a program and do it just like I would do any writing project. There’s still that planning phase before any writing gets done, of course. The only differences are the language and references I use and my target audience. Instead of a dictionary, I reach for the language’s class or function reference—I even use a text editor for both.

I believe that writing a program is like writing a book (programs also have publishers, right?). A small program equals a small book and a large program equals a large book—or a series of books. I could be wrong, of course, considering that I haven’t written any significant program (or book) worthy of SourceForge or Google Code. But then again, maybe the next hottest programming methodology will be called Persuasive Programming wherein you aim to persuade the computer to refrain from giving you headaches and to just do whatever it is that it should do.

What do you think? Could it be that we got it wrong when we likened programming, which deals more with the abstract and conceptual, to building construction, which deals with real, concrete materials? Should programmers, then, be called Software Writers rather than the current and more macho titles of Software Engineer and Software Architect? Or shouldn't writing and programming mix?

