Wednesday, October 3, 2007

VPS memory explanation

We've noticed that some customers are confused about the way memory is allocated. A Linux VPS allocates RAM the same way as any other Linux environment. Most of the confusion concerns the difference between allocated (reserved) memory and assigned (used) memory.

Privvmpages
  • Soft limit
This limit defines the maximum amount of memory that your VPS can allocate. It is usually set to 262,144 pages. One page is 4 KB, so this limit equals 1,048,576 KB, which is 1024 MB (or 1 GB).

  • Hard limit
This limit is typically just slightly higher than the soft limit, to make sure that your VPS won't die if the soft limit is reached.

  • Current use
This is the amount of memory that your VPS has currently allocated (note: allocated RAM basically means "reserved" RAM – it is not all actually being used).

Oomguarpages
  • Soft limit
Despite what the name of this parameter implies, this number isn't a limit, but a guarantee. This is the guaranteed RAM that will always be available to your VPS, no matter what happens.

  • Hard limit
This parameter is always set to 2,147,483,647 – which effectively means "unlimited". In other words: this parameter isn't used for anything and can be disregarded.

  • Current use
    This is the amount of memory that your VPS is currently using.

RAM pages
In a 32-bit environment, RAM is always managed in 4 KB pages. A page is basically a "block". To convert from pages to megabytes, multiply by 4 (to get kilobytes), then divide by 1024 (to get megabytes). For example:

65536 * 4 / 1024 = 256

In other words: 65536 pages of 4 KB each equal 256 MB. So if your oomguarpages soft limit is set to 65536, that means you have 256 MB of guaranteed RAM.


Current use: allocation vs. actual usage
As you can see in the above description, the current use of privvmpages represents how much RAM your VPS has allocated, and the current use of oomguarpages represents how much RAM is actually being used.

Now you might wonder: what's the difference between allocation and actual usage? Allocation basically means "reservation". For instance, when you run a web server, it might allocate 50 MB of RAM but only use 20 MB of that allocation.

Your RAM guarantee applies to the actual usage. For instance, if you have 256 MB of guaranteed RAM, your VPS can safely allocate 400 MB as long as the actual usage stays below 256 MB, since the guarantee applies to the actual usage (i.e. it simply doesn't matter how much RAM is allocated).


Burstable RAM
By now it should be clear what guaranteed RAM is. But what is burstable RAM? Burstable RAM is the memory that's available beyond the guaranteed RAM. For instance, your VPS might have 256 MB of guaranteed RAM and 1024 MB of burstable RAM. This means that after you have used up your guaranteed RAM, there's still 768 MB of burstable RAM available for burst usage - IF there's enough free memory on the host server.

We always leave some extra memory available in each host server as burstable RAM. Additionally, extra burstable RAM becomes available whenever another VPS on the same server doesn't use up all of its guaranteed RAM.

Please do keep in mind that in the event a VPS suddenly needs its guaranteed RAM, that VPS will always get it. As a result, a VPS that is using burstable RAM may get some of its processes killed in order to reduce its burstable RAM usage. It is therefore highly recommended not to rely on burstable RAM except for peak usage. As a rule of thumb, always make sure that your guaranteed RAM covers your typical RAM usage. For instance, if you typically use 350 MB of RAM, you shouldn't get a VPS with 256 MB of guaranteed RAM, since you'd constantly be using almost 100 MB of burstable RAM which may get killed off. It may well work just fine - but your processes are at risk that way.

Monday, September 3, 2007

Website Testing: Conquering Cross-browser, Cross-platform Woes

As I was doing final cross-browser testing for a redesign of SKDesigns, my website design business, the design implementation was working quite well in nearly every mainstream browser for Windows, Mac, Linux, and even the Lynx text-only browser. Unfortunately, though, I found problems with three old or little-used browsers, such as Internet Explorer 5.2 for Mac, that destroyed the CSS-positioned layout. I toiled over how best to handle these browser bugs, especially since my upcoming Web design book—currently in production at my publisher—stresses the importance of usability, readability, and degrading gracefully for older browsers. Today's post covers part of my decision-making journey and the approaches I chose for dealing with these CSS bug-riddled old and little-used browsers.

Basic Development Goals

First, my basic development goals for this redesign project:

  • Standards-compliant for XHTML 1.0 Transitional, CSS 2.1
  • WCAG-compliant (W3C's Web Content Accessibility Guidelines), preferably Priority 2 or 3, but at least Priority 2
  • Use of liquid widths, not fixed widths, for the entire layout
  • The visual display doesn’t need to look identical in every browser, even the latest browsers.
  • The visual display should look the best in the latest browsers and degrade gracefully for older browsers so that even users with old browsers can read, navigate, and use the website.
  • Use CSS workarounds or hacks only as a last resort, maintaining W3C validation, such as those listed at Peter-Paul Koch’s site, CSS Hacks: Safe List and his article at Digital Web, Keep CSS Simple.

Design and Development Approach

I kept in mind the above goals while I created the visual design. Here’s the basic skeleton layout upon which I created the design and markup:

[Figure: SKDesigns basic skeleton layout]

As I also recommend in my upcoming book, I first developed the design for the homepage to validate to W3C Recommendations, which in this case were XHTML 1.0 Transitional and CSS 2. I then checked it with Opera 8 and Firefox 1.04, since they support CSS 2 the best at the moment. Once those worked, I checked it with Internet Explorer 6, finding plenty of problems due to several of this browser's frustrating CSS bugs, such as the following:

  • Float problems, as explained at Position is Everything’s The Float Model Problem.
  • IE’s 3-pixel text shift problem, as explained by John Gallant and Holly Bergevin via Position is Everything’s The IE Three Pixel Text-Jog and via How To Attack An Internet Explorer (Win) Display Bug.
  • Jumping text when a link is hovered, as explained by Ingo Chao via Position is Everything’s Quirky Percentages in IE6’s Visual Formatting Model—my section navigation links jumped to the left on hover. In addition, within the main content area, text a paragraph or heading below the hover jumped up(!), but I suspect it’s for a similar reason. I think I’ve finally resolved both problems, in part by designating my
    containers with negative margins:

    #somecontainer{
    margin-left:-3px;
    margin-right:-3px;
    }

    The problem is totally resolved if I also add the same to the left padding:

    #somecontainer{
    padding-left:-3px;
    }

    The W3C validator doesn’t like negative padding even though negative margins will validate, however. I removed all but one negative padding designation, and I think the bug is still gone, but I’ll be doing further retesting.

I then checked it with Lynx (text-only browser) and Netscape 4.x. So far so good.

Checking Colors

In addition, I checked the visual design on several different computer displays to see how the colors rendered. On one computer's display, the topmast's heading background looked incredibly washed out rather than showing the rich colors that I had in mind. The colors looked as intended on my own display, which is set to the sRGB standard. I went back to Photoshop and did some serious color revisions to try to better compensate for other displays.

When all that checked out OK, I then created a couple of internal pages and retested, repeating until I’d created all the pages.

Then a Print Style Sheet

At that point, I went ahead and created a simple print style sheet. As you'll see in the Netscape 4 example below, the on-screen logo in the top heading is a transparent .gif that sits over the topmast's dark, multi-colored background, but its edges look jagged without that dark background behind it, as I expected. I created a different version for print that works on a white background. In my CSS I hide the print version for screen display and show it for print, and likewise hide the screen version in my print style sheet, as follows:

In my screen style sheet:

#logoscreen {display:block;} /* screen logo */
#logoprint {display:none;} /* print logo */

In my print style sheet, the opposite:

#logoscreen {display:none;} /* screen logo */
#logoprint {display:block;} /* print logo */
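For completeness, here's a minimal sketch of how two such style sheets might be attached in the page's head, using the media attribute so each one only applies where intended (the file names are placeholders, not necessarily the ones actually used on the site):

<link rel="stylesheet" type="text/css" href="screen.css" media="screen" />
<link rel="stylesheet" type="text/css" href="print.css" media="print" />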

I tested the print version by clicking on Print Preview in Firefox, Opera 8, and IE6, where it worked as expected. I didn’t check it in Netscape 4 at that point, though, which bit me later, as I explain below!

More Cross-Browser, Cross-Platform Tests

Finally, I checked several pages via BrowserCam, especially for Mac browsers, where I found frustrating problems:

  • Positioning problems in IE5.2 Mac make the pages difficult to impossible to read due to content floating over other content, such as the footer area with the bottom-of-page navigation and contact links that should rest at the bottom of each page. The bottom dark blue strip should span the entire width of the page, too. The visual display in IE5.2 Mac is not good.

    [Screenshot: Explorer 5.2, Mac OS X 10 - float problems]

  • The bottom navigation in IE4 Windows didn’t display as inline list items, in addition to floating up over the content area. The bottom navigation should be a fairly narrow horizontal navigation strip that spans across the bottom of the page, similar to the dark navy strip just below it, not the fat dark tan area with block list item navigation that’s rendered by IE5.2 Mac in the example below. There’s also the big white gap below the footer, too, that shouldn’t be there, of course! UGH!

    [Screenshot: Explorer 4, Windows 98 - bottom navigation mess]

  • In Netscape 4.x, the “print only” version top-of-the-page logo graphic and the on-screen logo appear at the top of the page, along with no background for the topmast, so it looked horribly ugly:

    [Screenshot: Netscape 4.8 on Linux - both logos displayed]

  • Konqueror 3.05 for Linux 8.0, moves the right column to the bottom of the page, overlapping the footer and making a big mess, to put it mildly.

What To Do About Old Browsers, Little-Used Browsers?

Next came deciding what to do about these problems. My bare minimum requirement is to be sure the site is still usable and readable in the above problem browsers. The above problems didn’t meet that, as shown in those screenshots.

Re-check Bug Lists

First, I thought I'd re-check bug lists to see whether I'd forgotten to allow for anything I hadn't already covered. Some insightful online resources are:

Re-check Browser Stats

Next, I decided to check the browser stats for IE5.2.3 Mac, Konqueror 3.x for Linux, and the latest for IE4.x and Netscape 4.x. Even 1/2% or 1% using any of these still means 300-600 visitors to my business site each week who wouldn’t be able to read the content or navigate through the site, which is not OK with me. I wanted to at least meet the minimum.

At the same time, these browsers have plenty of bugs and oddities, and I really didn’t want to spend a lot of time with this or mess up my CSS for the most-used mainstream browsers.

To make sure the stats numbers in my head were still current, I re-checked my site’s browser usage statistics and other freely available browser stats and general trends. I especially wanted to find out the numbers of visitors using specific Mac browsers, and how the trends are going. I also know that stats aren’t totally accurate, so checking several sources gives me a broader picture, not just what my own site visitors use in any given week. Here are a couple of helpful resources:

  • Browser News by Chuck Upsdell, is one of the best places to find out the latest trends, stats, and summaries about them. You’ll also find links to plenty of resources for more information. I spent a little time reading the latest and following a few of the links to other stats.
  • Browser Statistics by Rendering Engine by John Haller culls stats from WebsideStory, OneStat.com, and TheCounter. As Haller states, “All three are imperfect, but together, they may cancel out some limitations. WebSideStory is very US business-centric. OneStat is more global. TheCounter is more geared to smaller sites.”

What are Other Current Opinions?

I also wanted to see what others are doing about these browsers, especially IE5.x for Mac and Windows. Here are some resources that I found helpful:

Figuring Out Practical Solutions

Given the numbers are so small and diminishing as the weeks go by, I decided to serve these old or little used browsers a visually simple website that’s readable and navigable, although it won’t have the visual design seen in current mainstream browsers.

First, I thought I’d try an approach to hide style sheets from IE5 Mac. That way I’d keep hacks and workarounds to a minimum within my style sheets. Here are some possibilities that I explored, the latter of which I chose to use for my site:

After I tested that filter, I also added another filter to hide my style sheets from IE3-5 Windows, too: Tantek Çelik’s High Pass Filter. The result in IE5 Mac and IE3-5 Windows is a visually simple one, but it’s now readable and usable. In addition, I didn’t need to add any more hacks within my existing style sheets. I can live with this result for such a small number of visitors, especially since those numbers keep shrinking.

[Screenshot: IE5.2.3 Mac after applying the filter]

I created a simple style sheet for all browsers, one that these old or little-used browsers, including Netscape 4.x, can see and use without any harmful effects. The latest browsers also get a more advanced style sheet that they can handle, hidden from the old or little-used browsers via the filters. I might add more styles to the simple style sheet before I finish the redesign, but I haven't decided on that yet. I can live with it like it is right now, too, especially knowing that those using these older or little-used browsers can still use the site.

Along the way I found info on serving a style sheet only to IE5 Mac, for those interested in trying that. This is shown with a great explanation via Stop Design’s Doug Bowman at IE5/Mac Band Pass Filter:

/*\*//*/
@import "ie5mac.css";
/**/

Browser CSS Bugs, Hacks, and Workarounds

I’ve talked about hacks and workarounds a fair amount in this post, but I’m still a firm believer that it’s far better in the long run to create your style sheets without any hacks or workarounds first, and then only use them conservatively when deemed absolutely necessary. For example, you can do a lot to avoid many of the browser quirks and bugs by how you approach your CSS. There’s plenty of documentation around the Web about it, but here are a few:

  • CSS Crib Sheet? posted November 19, 2003, by Dave Shea via his website, mezzoblue.com. Be sure to review the comments for that post, too, as it’s an interesting discussion.
  • CSS Problem-Solving posted March 3, 2004, also by Dave Shea, which is somewhat of a follow-up to the above.

If you’re creating your own site that you can monitor and change as new browser versions come out, you might not need to be as conservative, but if you complete a site for a client and you sign off on the project, it might be better to avoid hacks and workarounds, or at least keep them in a separate style sheet that can be more easily removed once they’re not needed.

Hacks and workarounds today may cause problems later. The next version of Internet Explorer is on the horizon, and other browsers will continue putting out new versions, too. The approach I'm really describing here has been coined "Progressive Enhancement." See Steve Champeon's article via Webmonkey: Progressive Enhancement and the Future of Web Design.

See also Integrated Web Design: Strategies for Long-Term CSS Hack Management, by Molly Holzschlag for Informit.com, June 24, 2004.

Checking Visual Layouts via Online Screenshot Services

As I researched IE5 Mac info and testing, I learned of and tried some free Mac screenshot services online, including the following:

  • BrowserShots, a free screen capture service for several Macintosh browsers at 800x600 and 1024x768: Firefox 1.0.4, Safari 2.0, MSIE 6.0, Opera 7.54. As I write this post, there’s a 12-day turnaround time for screenshots, as there are lots in line ahead of you.
  • iCapture, free screen captures with Safari for Mac.
  • lixlpixel Screen Capture, free screenshots with these Mac browsers: Safari 2.0, Internet Explorer 5.2.3, Mozilla 1.7.7. The screenshot results are immediate, too—no waiting.

In addition, I also use BrowserCam, which is a commercial service:

  • BrowserCam is a fabulous service that I wholeheartedly recommend. While the services above are free, they only capture the top of the page with a limited number of browsers. If you use an anchor within your page, though, such as #footer, and input your URL with the anchor, such as http://website.com/pagename.html#footer, you'll get that part of the page. (Thanks to lixlpixel for that tip!) BrowserCam takes screenshots that cover the entire page based on page scroll increments, which is how I identified the footer navigation problems shown above, for example. In addition, BrowserCam includes quite a few browsers on multiple platforms.

Getting Ready for Launch

My business site’s redesign is now almost ready to go. I’m in the midst of editing and updating all the content. I’ll do a final test of the entire site with CSE HTML Validator’s batch processing feature that checks for W3C validation, spelling, and links (really handy!). I’m planning to have it online live within a few days.

Ah, Web Standards!

Well, the hurdles I’ve had to jump over for this one redesign are another example of why Web standards matter. While the above may sound like a lot to figure out, the above is nothing compared to the version 3 and 4 browser days and the lack of even decent browser support for W3C Recommendations. At the same time, designers and developers like myself also wish standards support could be a lot better than it is now. We have to keep after 'em and continue to push for it.

Interestingly, most mainstream users don’t even think about standards. They just want to visit a website and do whatever it is they came to do there. That’s how it ought to be, too.

Users shouldn’t have to think about standards at all, in my opinion. Standards should live quietly in the background helping to make everything work smoothly regardless of the browser or platform. In an ideal world, we designers and developers wouldn’t have to deal with all these browser bugs, either.

Courtesy,

www.brainstormsandraves.com

Wednesday, August 29, 2007

Which web page elements lead to high Google rankings?

The German company Sistrix analyzed the web page elements of top ranked pages in Google to find out which elements lead to high Google rankings. They analyzed 10,000 random keywords, and for every keyword, they analyzed the top 100 Google search results.

Which web page elements lead to high Google rankings?

Sistrix analyzed the influence of the following web page elements: web page title, web page body, headline tags, bold and strong tags, image file names, images alt text, domain name, path, parameters, file size, inbound links and PageRank.

  • Keywords in the title tag seem to be important for high rankings on Google. It is also important that the targeted keywords are mentioned in the body tag, although the title tag seems to be more important.

  • Keywords in H2-H6 headline tags seem to have an influence on the rankings while keywords in H1 headline tags don't seem to have an effect.

  • Using keywords in bold or strong tags seems to have a slight effect on the top rankings. Web pages that used the keywords in image file names often had higher rankings. The same seems to be true for keywords in image alt attributes.

  • Websites that use the targeted keyword in the domain name often had high rankings. It might be that these sites get many inbound links with the domain name as the link text.

  • Keywords in the file path don't seem to have a positive effect on the Google rankings of the analyzed web sites. Web pages that use very few parameters in the URL (?id=123, etc.) or no parameters at all tend to get higher rankings than URLs that contain many parameters.

  • The file size doesn't seem to influence the ranking of a web page on Google although smaller sites tend to have slightly higher rankings.

  • It's no surprise that the number of inbound links and the PageRank had a large influence on the page rankings on Google. The top result on Google usually has about four times as many links as the result at position 11.

What Google likes in a website

Google, the Internet's most important search engine, now has a lot of competition from Yahoo's new search engine and the upcoming MSN search. Still, Google is a very important search engine to understand, as it has the power to bring a tremendous amount of traffic to your website.

This article provides practical tips and know-how to improve your Google rankings.

• Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.
• Offer a site map to your users with links that point to the important parts of your site. If the site map is larger than 100 or so links, you may want to break the site map into separate pages.
• Create a useful, information-rich site and write pages that clearly and accurately describe your content.
• Think about the words users would type to find your pages, and make sure that your site actually includes those words within it.
• Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images.
• Make sure that your TITLE and ALT tags are descriptive and accurate (see the brief example after this list).
• Check for broken links and correct HTML.
• If you decide to use dynamic pages (i.e., the URL contains a '?' character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them small.
• Keep the links on a given page to a reasonable number (fewer than 100).
• Content: Google searches the content on your site to return relevant search results. Be sure to include relevant keywords in the text of all your pages. Also, try to keep them near the top of your pages - Google may not crawl all the way down your page.
• Domain Name: Having your keywords in your domain name may boost your ranking. Google seems to favor sites with keywords in their domain.
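As a brief, hypothetical illustration of the title and alt advice above (the page and file names are made up):

In the page head:
<title>Handmade Oak Bookcases - Smith Woodworks</title>

In the page body:
<img src="oak-bookcase.jpg" alt="Five-shelf handmade oak bookcase" />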

Those are a few of the basics of Google rankings. Integrate some of these suggestions into your web pages, then wait for Google to update over the next few weeks to see the difference in your rankings. Verify what works and what doesn't, and constantly improve your site based on what you learn.

CSS Drop Shadows II: Fuzzy Shadows

We like shadows. We enjoy making them drop and we love CSS and standards, so we wrote CSS Drop Shadows. The little voice in our head approved of it. We thought that was the end of it.

We thought wrong.

The internet being the kind of medium it is, minutes after the publication of the article, we started receiving comments, queries and suggestions for improvements. Most notable among the latter was Phil Baines’ method for keeping the markup simple when dealing with paragraph drop shadows. We are indebted to him.

The most complained-about shortcoming of the technique turned out to be the sharp top and left edges of the shadow, which, although generally acceptable, are unlike what an image editing program would produce (a fuzzy shadow). Given that the shadow image is effectively clipped at those points, we felt this was an unavoidable inconvenience, chiefly due to Internet Explorer’s inability to display PNG’s transparency natively.

[Images: a cat's nose with the drop shadow applied, and a blowup of the shadow detail]

Then Jan pointed out a technique for making Internet Explorer render PNG's alpha channel correctly. It works by activating Explorer's AlphaImageLoader filter (previously discussed in this ALA article), but does so in an unobtrusive way which requires no extra JavaScript code. We think it's a godsend. Combining this technique, some image trickery and our "fake shadow offset" method, we'll be able to make properly fuzzy shadows that work across browsers.

In this article we’ll learn how to:

  • Hide a stylesheet from non-IE browsers so it doesn’t affect document validation.
  • Coerce IE5.5/IE6 into displaying PNG transparency correctly.
  • Use the above to create fuzzy shadow edges for our Drop Shadow effect.

First, we’ll fabricate our fuzzy shadow edge. To do this, we must create an inverse shadow in our image editing program. Usually we’d use a black shadow over a background color. For this effect, we’ll need a colored shadow. It must be the same color as the background over which we’ll apply the effect.

Start with an image like the “fake shadow offset” we described in the previous article. This one should be thinner than before (about 3px thickness for a 6px shadow has worked out well for us). Our examples will use white as background color. When reproducing this technique, adjust for yours.

We’ll apply a “Drop Shadow” effect to this image, taking care to specify white for the shadow color. A strong shadow is desirable — the stronger it is, the faster your shadow will seem to fade. We should now have something that looks like this:

Fake Shadow offset image

Save this image as a PNG with full transparency. We’ll use this file for IE5.5, IE6 and standards compliant browsers. Make a regular version sans shadow with thicker offset (as seen in the previous article) and save that as a GIF file. We’ll feed this one to IE 5 (which does not support the AlphaImageLoader filter). Here are sample files for your perusal: PNG/GIF (Check them on an image editing program, since they will look like white on white in your browser).

Since we now have a solid color at the edge of our offset, we’ve effectively given up on the possibility of having a transparent shadow, so we’ll use a simple GIF for it. Make sure you apply the effect over the background color you’ll use. Here’s our example shadow: GIF.

The markup for this effect will be two <div>s around our image/block element:

<div class="alpha-shadow">
<div>
<img src="..." alt="just a test" />
</div>
</div>

The basic technique is still the same: We'll set up the fake offset (with its inverse shadow) as background of the innermost <div>, and the shadow as background of the outermost one. When overlapped, the transparency of the PNG will seem to gradually dissolve the shadow image until it becomes the solid background color. The tricky part is making this work in Explorer.

Illustrated process

Our CSS is pretty much what we had seen in the previous article:

.alpha-shadow {
float: left;
background: url(img/shadow1.gif) no-repeat bottom right;
margin: 10px 0 0 10px !important;
margin: 10px 0 0 5px;
}

.alpha-shadow div {
background: url(img/shadow2.png) no-repeat left top !important;
background: url(img/shadow2.gif) no-repeat left top;
padding: 0px 5px 5px 0px;
}

.alpha-shadow img {
background-color: #fff;
border: 1px solid #a9a9a9;
padding: 4px;
}

If you look closely you'll notice we're still including the non-fuzzy GIF offset (shadow2.gif) as background of the inner <div>. This is for the benefit of Internet Explorer 5.0, which doesn't support the AlphaImageLoader filter. As it stands, this code will apply to all versions of Explorer. To make adjustments for IE 5.5/6, we'll create an extra CSS file.

ie.css

To activate the AlphaImageLoader filter in a simple and reliable way, we’ll first include it in its own CSS file and name it ie.css. We know this is shameful and will probably make the Standards Squad put a price on our head, but we’ll hide this file from other browsers later, so it’s ok. Kind of.

Our ie.css stylesheet will look like this:

.alpha-shadow div {
filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='img/shadow2.png', sizingMethod='crop');
background: none;
}

The AlphaImageLoader filter supports two sizing methods: crop and scale. We’ll use crop for our offset (scale fits the full image into the block, and is not what we’re looking for). Since the filter is somewhat limited and does not support CSS-like image positioning, we’re stuck with shadows that drop down and to the right (the image on its default position is all the way to the left and top).

We should note that, since the filter places the image in the foreground of the block element rather than as its background, this technique could be set up to show fuzzy shadows in Explorer with only one <div> surrounding the image, and show the hard edge shadow for other browsers. Not being ones to reward bad browser behavior, we'll stick to the technique with the extra <div>, which gives us a fuzzy shadow in almost every browser under the sun.

The second line, where we set the <div>'s background to none, is there in order to remove the GIF offset we specified in the CSS before. Since we'll only feed this file to IE5.5 and IE6, IE5 keeps the GIF offset (and thus displays a hard edge shadow). The rest of the browsers ignore that GIF file thanks to the !important method we specified in the previous article.

Conditional Comments

To hide the ie.css stylesheet from all browsers that don’t need it, we’ll use Conditional Comments, a Microsoft provided technique to serve content to specific versions of Internet Explorer. They are included in the html document and look like standard html comments, so browsers other than IE5+ ignore them (and so does the w3c Validator, which is convenient). We’ll insert this in the of our document, after the CSS for the drop shadow:


What that does is specify that the enclosed bit of code should be used by versions Greater Than or Equal (the gte part) to Internet Explorer 5.5 (it must be specified as 5.5000 because of Version Vectors), thus feeding IE5.5 and IE6 the special stylesheet.

That completes the technique. This may seem overly complicated just to achieve a fuzzy shadow, but then again, they say that God is in the details. As a plus, the mentioned techniques can be used to achieve all sorts of different effects.

Here, have a cat:

Cat on floor


CSS Drop Shadows

They’re the corkscrew in every graphic designer’s Swiss Army knife. Much used, oft maligned but always popular, drop shadows are a staple of graphic design. Although easy to accomplish with image-editing software, they’re not of much use in the fast-changing world of web design. On the web, adaptability and ease of use dictate trends — and static images with a fixed background effect are not very adaptable.

But what if we had a technique to build flexible CSS drop shadows that can be applied to arbitrary block elements? That can expand as the content of the block changes shape? Compatible with most modern browsers? With better results for standards-compliant browsers? If you’re not sold yet, we can also tell you that it requires minimal markup.

Interested? Well, first off, we wouldn’t want to take credit for something we didn’t invent, but merely improved upon. This particular technique was conceived and demonstrated by Dunstan Orchard, of 1976 design fame (hats off to you, Dunstan). We found it was easy, intuitive, and worked like a charm. However, after closer examination, we saw room for improvement and set to work on it.

Here’s how it works: you need to make a drop shadow image in the image editor of your choice. It should be only the shadow, without a visible border (an easy way to do this is by applying the effect to an empty selection). Make sure your image is big enough to cover the maximum expected size of the block elements that will use it. In practice, we’ve found that 800 x 800 is a respectable enough size. Save it as a GIF, making sure you use the color of the background you’ll apply the effect over. Additionally, save the same shadow with full alpha transparency (no background color) as PNG. This will be used to feed a better shadow to browsers capable of displaying it. These are some sample files: GIF file/PNG file.


We’ll start by giving a shadow to an image and then move on to other block elements. In a moment of ingenuity, we decided to name our class “img-shadow”. Our test subject shall be this cute cat:

And its corresponding markup (one div is the only extra markup we'll need):

<div class="img-shadow">
<img src="..." alt="test" />
</div>

The following illustration shows how the technique works:

Technique illustration

First, our previously prepared shadow file will be set as background for the div.

background: url(shadow.gif) no-repeat bottom right;

Then we’ll give the image negative top and left margins to make the “drop” that gives us the shadow. Our shadow is six pixels wide, so that’s our magic value.

margin: -6px 6px 6px -6px;

We float the div to avoid having to specify its size (otherwise it will take up all available horizontal space).

Remember we said that we’d provide better shadows for better browsers? This line will do the trick:

background: url(shadowAlpha.png) no-repeat right bottom !important;

That "!important" bit tells the browser that the declaration is to take precedence over normal declarations for the same element (see the spec). It also happens to be unsupported in all versions of Internet Explorer, which also lack native support for transparent PNGs. It's almost too convenient. By specifying conflicting declarations twice, we get the desired behavior (IE takes the second one, most other browsers the first one). The end result is that, were the background color to change, browsers that support PNG would maintain a perfectly transparent shadow. Sadly, Explorer's shadow will stay with its original background color.

But why do this you ask? The reasons are twofold:

  • We can: This is a painless, effortless and automatic hack that yields great results in the browsers that support it.
  • It may fix itself: If the new version of Internet Explorer (shipping with Longhorn) supports both of these standards, we won’t have to fix a thing to get pixel-perfect, truly transparent shadows in it.

The finished CSS code looks like this:

.img-shadow {
float:left;
background: url(shadowAlpha.png) no-repeat bottom right !important;
background: url(shadow.gif) no-repeat bottom right;
margin: 10px 0 0 10px !important;
margin: 10px 0 0 5px;
}

.img-shadow img {
display: block;
position: relative;
background-color: #fff;
border: 1px solid #a9a9a9;
margin: -6px 6px 6px -6px;
padding: 4px;
}

Differences in margin size account for IE’s box model, and that last padding value gives us a nice frame around the image. Sadly, it is lost in IE 5.5 and 5.0. The drop shadow effect stays, though.

Our shadow will blend seamlessly with its background in standards-compliant browsers. In Explorer, the shadow will clash with the background unless you’ve stuck with the background color you used for your shadow. You can see the results here:

[Images: the cute cat with the drop shadow applied]

For the next part, we’ll apply the drop shadow effect to a paragraph.

Logic dictates that the same technique should yield similar results when working with a paragraph, which can be treated as another block element. And indeed, with most browsers, it works like a charm. Care to guess which one doesn’t get it right?

While developing this technique, we found that when working with a block element other than an image, in bold defiance of common sense, Explorer decided to clip the left and top parts of the block — the ones that “jump” out of the shadow — regardless of what we tried. Amusingly enough, the only version of Explorer that gets this right is 5.0. No amount of hacks, overflow settings, or gentle suggestions seemed to help (and yes, righteous cursing was tried). We gave up and decided that a different approach was called for.

The method we came up with is partly based on Douglas Bowman's Sliding Doors methodology, and calls for an extra bit of markup (another div), so our paragraph will look like this:

<div class="p-shadow">
<div>
<p>The rain in Spain ...</p>
</div>
</div>

Instead of giving the paragraph negative top and left margins, we’ll give it positive right and bottom padding. This will expose the shadow (set as background for the outermost div). Then we’ll fake the shadow offset by using a partly transparent GIF as background for the inner div, which will overlap the shadow. Make sure that the visible part of this image is the same color as the background over which you use the drop shadow effect. Name the image “shadow2.gif”. It should be constructed as follows:

Fake offset image example

Here’s an example GIF file (this image will most likely look as white on white on your browser, so you may want to save it and take a look at it in your image editing program).

This illustration shows what we’re going to do:

Paragraph technique illustration

The following are the styles needed to accomplish the effect. Notice that the extraneous image and padding are used only by Internet Explorer. Most other browsers effectively ignore the inner div and stick with the method we used for the drop shadow of the image.

.p-shadow {
width: 90%;
float:left;
background: url(shadowAlpha.png) no-repeat bottom right !important;
background: url(shadow.gif) no-repeat bottom right;
margin: 10px 0 0 10px !important;
margin: 10px 0 0 5px;
}

.p-shadow div {
background: none !important;
background: url(shadow2.gif) no-repeat left top;
padding: 0 !important;
padding: 0 6px 6px 0;
}

.p-shadow p {
color: #777;
background-color: #fff;
font: italic 1em georgia, serif;
border: 1px solid #a9a9a9;
padding: 4px;
margin: -6px 6px 6px -6px !important;
margin: 0;
}

The same considerations for background color mentioned in the image example apply for paragraphs. Here’s the end result. (Try resizing the text on your browser to see the box change size and watch the shadow adjust.)

The rain in Spain falls mainly on the plain.



Additional notes

In this article, the styles for image and paragraph have been broken up for clarity, but both could be specified in one fell swoop with minor adjustments.

This technique has been tested with Gecko-based browsers, Safari, Opera and IE 5.0+. Apart from the differences noted, no problems were observed. It should work well with most of the stuff out there (no, not Netscape 4.x).

About the Author

Sergio Villarreal lives in México but spends most of his time in his head. He maintains a weblog and a rarely updated webcomic at Overcaffeinated.net and makes a point of learning a new trick every day. Some are even useful.

Tuesday, August 28, 2007

PHP/Java Bridge

What is the PHP/Java Bridge?

The PHP/Java Bridge is an optimized, XML-based network protocol which can be used to connect a native script engine, PHP, with a Java or ECMA 335 virtual machine. It is more than 50 times faster than local RPC via SOAP, requires fewer resources on the web-server side, and is faster and more reliable than communication via the Java Native Interface.

What can I do with the PHP/Java Bridge?

The PHP/Java Bridge allows you to quickly access Java classes from within your PHP scripts without having to know Java. It also allows you to access PHP scripts from within your Java classes without having to know PHP.

Because of this two-way flexibility, you can access hundreds of pre-built Java classes from your PHP scripts, and hundreds of pre-built PHP scripts from your Java classes, opening up your applications to greater flexibility and enhanced functionality.

How it works

The php java extension and the pure PHP PHP/Java Bridge implementation use this protocol to connect running PHP instances with already running Java or .NET back ends. The communication works in both directions, the JSR 223 interface can be used to connect to a running PHP server (Apache/IIS, FastCGI, ...) so that Java components can call PHP instances and PHP scripts can invoke CLR (e.g. VB.NET, C#, COM) or Java (e.g. Java, KAWA, JRuby) based applications or transfer control back to the environment where the request came from. The bridge can be set up to automatically start the PHP front-end or the Java/.NET back end, if needed.

Each request-handling PHP process of a multi-process HTTP server communicates with a corresponding thread spawned by the VM. Requests from more than one HTTP server may either be routed to an application server running the PHP/Java Bridge back end or each HTTP server may own a PHP/Java Bridge back end and communicate with a J2EE Java application server by exchanging Java value objects; the necessary client-stub classes (e.g.: SOAP stubs or EJB client .jar files) can be loaded at run-time.

ECMA 335-based classes can be accessed if at least one back end is running inside an ECMA-compliant VM, for example Novell's MONO or Microsoft's .NET. Special features such as varargs, reflection or assembly loading are also supported.

When the back end is running in a J2EE environment, session sharing between PHP and JSP is always possible. Clustering and load balancing is available if the J2EE environment supports these features.

The PHP/Java Bridge does not use the Java Native Interface ("JNI"). PHP instances are allocated from the HTTP (Apache/IIS) pool, instances of Java/J2EE components are allocated from the back end. The allocated instances communicate using a "continuation passing style", see java_closure() and the invocable interface. In case a PHP instance crashes, it will not take down the Java application server or servlet engine.

Designing Your Site for Web 2.0

Have you heard it? There's a buzz like never before on the Internet. Everyone is talking about Web 2.0. If you're like many people, you may think it's a marketing gimmick and quite an overused statement. If so, you would be at least partially right.
Fortunately, there's another side to the story. Underneath all of the chatter is a concept that is even more powerful than the hype that surrounds it.
The concept of Web 2.0 started as a conference brainstorming session between O'Reilly and MediaLive International. During their discussion, they analyzed the companies that had survived the dot-com collapse. Interestingly enough, many of these companies had quite a few things in common. Was there a connection? Was the dot-com crash a turning point for the web? O'Reilly and MediaLive believed so. And therefore, Web 2.0 was born.
So, what is it?
Wikipedia defines Web 2.0 as:
"The term Web 2.0 refers to a second generation of services available on the World Wide Web that lets people collaborate and share information online. In contrast to the first generation, Web 2.0 gives users an experience closer to desktop applications than the traditional static Web pages. Web 2.0 applications often use a combination of techniques devised in the late 1990s, including public web service APIs (dating from 1998), Ajax (1998), and web syndication (1997). They often allow for mass publishing (web-based social software). The concept may include blogs and wikis."
There is no official standard for what makes something "Web 2.0", but there are certainly a few common attributes that often describe this new culture of transformation.
You can see many of these concepts in sites like Flickr, del.icio.us, Wikipedia, Amazon reviews, and the eBay reputation system.
Web 2.0 is built on a system of collective knowledge. It provides a social fabric for the Web, empowering the individual and giving them an outlet for their voice to be heard.
However, we have only seen a small glimpse of the effects of these new transitions. Del.icio.us and Digg are just the beginning of what will soon become a much more interactive Web.
Each day there are a variety of new online applications being released: online spreadsheets, online word processing, to-do lists, reminder services, and personal start pages.
In addition, many of the changes that are evident in the world of Web 2.0 can be seen through common design practices. Old-school HTML was full of boxes and square tables. Today's web designers are rapidly moving away from boxy designs to flexible curves. When designing for today's Internet, it's all about rounded designs, nice big text, gradients, glassy effects, and bright colors.
Rounded Corners:
Let's face it. The days of good ol' tables and square boxes are good and gone. The Web 2.0 era has ushered in the pleasing sight of rounded corners.
Unfortunately, many web masters have spent unending hours trying to obtain perfectly rounded corners. Their pain and suffering has led to a number of tutorials that will help us bypass the grief.
Below are some links to tutorials that will get you started creating your very own rounded corners:
http://www.webcredible.co.uk
http://www.alistapart.com
http://www.web-20-workgroup1-swicki.eurekster.com
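As a rough sketch of where this is heading, the CSS3 border-radius property (plus the vendor-prefixed versions a few browsers already support) can round a box without any images at all; browser support still varies, which is why the image-based tutorials above remain the safer route. The class name below is hypothetical:

.rounded-box {
background: #e8f4fd;
padding: 15px;
-moz-border-radius: 10px; /* Gecko-based browsers such as Firefox */
-webkit-border-radius: 10px; /* Safari */
border-radius: 10px; /* the CSS3 property */
}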
Nice Big Text:
Have you ever been to a web site where you could barely read the text? Well, join the club. Fortunately, times have taken a turn for the better. With Web 2.0, oversized fonts have come into style. You can start using plenty of oversized text to make important messages stand out. Of course, you don't want all of the text on your web site to be supersized, but make sure that the most important text on the page is bigger than normal text.
You can see some examples at:
http://www.corkd.com/ and http://www.blurb.com
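A minimal sketch of the idea, assuming a hypothetical class for an introductory blurb that should stand out from the regular body copy:

.intro-text {
font-size: 1.8em; /* noticeably larger than the body text */
line-height: 1.4;
}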
Gradients:
Gradients are another popular design element of Web 2.0. This is especially true of backgrounds. A common background used today has a gradient at the top, fading down to some other color that continues throughout the background for the rest of the page.
For a complete tutorial on how to create this type of effect, go to http://www.photoshoplab.com.
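In CSS terms, the effect usually boils down to a thin gradient image tiled horizontally across the top of the page, with a matching solid color underneath; a minimal sketch, where the image name is only a placeholder:

body {
/* the gradient image fades into the solid color below it */
background: #f0f0f0 url(images/header-gradient.png) repeat-x left top;
}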
Colors:
Web 2.0 sites are strongly defined by their colors. They nearly always use bright and cheery colors - lots of blue, orange, and lime green.
They also often include large, colorful icons, sometimes with reflections and drop shadows. To see some samples of how web sites are effectively using bright colors, check out:
http://www.9rules.com, http://www.iconbuffet.com and http://www.linkedin.com
Other common design characteristics include the use of tabs, reflections, glassy effects, large buttons, and big text boxes for submission forms.
Sites that are embracing Web 2.0 can also often be identified by their tag clouds. If you have traveled the web much in the last 6 months, then you have surely seen tag clouds. They are used prominently on del.icio.us, Technorati, and Flickr. A tag cloud is basically a visual depiction of the content on a website. Oftentimes, more popular tags are shown in a larger font.
Why not add a tag cloud to your own site? Not only do they look cool, but they also provide your visitor with a search tool that helps them to find your content quickly and easily. You can create your own tag cloud with a very simple service called Eurekster Swicki. This is a community-based search engine that creates free tag clouds for web sites.
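Under the hood, a tag cloud is usually just a set of links whose font size is scaled by popularity. A simplified, hypothetical sketch of the markup (real services, including Eurekster Swicki, generate something like this for you):

<div class="tag-cloud">
<a href="/tag/css" style="font-size: 2em;">css</a>
<a href="/tag/ajax" style="font-size: 1.5em;">ajax</a>
<a href="/tag/design" style="font-size: 1em;">design</a>
</div>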
Although we have discussed many of the design elements associated with Web 2.0, this change is much more than just an aesthetic transition. Web 2.0 is essentially about a transition in the way we experience the Internet. The new Ajax programming base allows web masters to create an architecture of participation for their users. Web 2.0 refers to the ongoing transition to full participation on the Web.
Your web site can be so much more than an information resource. Your web presence is a place. With the proper programming skills, you can create a virtual world complete with an online shopping mall that compares prices from a variety of merchants, looks for potential coupons, and displays Amazon reviews.
In addition, traditional desktop applications are rapidly becoming available online as a service. Why not offer your visitors the ability to create their own to-do lists, online note pads, reminder services, and personal start pages?
Create an experience, not just a site.

About The Author
Kim Roach is a staff writer and editor for the SiteProNews and SEO-News newsletters. You can contact Kim at: kim @ seo-news.com

Monday, August 27, 2007

Suicide in Cyberspace -Your Outward Links Can Kill Your Rankings

Link building strategies have, for most people for a long time, revolved around reciprocal link exchanges. Whilst most people understand that links are important, they generally don't understand why this is so. In a nutshell, a link to your site has traditionally been accepted by Search Engines as a vote for your site. A link from a topic or theme-related site to yours is better than a link from a site having a completely different topic. An important site's link to yours carries more weight - for example from The Open Directory, or Yahoo Directory. All pretty straightforward...

BUT... the rules have changed... significantly! All the thinking webmasters worked diligently to build links - willy-nilly - in order to subvert the search engine rankings and gain an advantage for themselves at the expense of everyone else. For a long time, there have been mutterings about this, and comments from Google staffers about possible penalties for linking to "bad neighbourhoods" and - heaven forbid - buying links! Google et al simply don't approve of willy-nilly link-building schemes, and have recently tightened the screws a bit more, in two notable ways...

Bad Links
Some links are bad... for example, if you are a car sales company and you've got dozens of completely irrelevant links to international hotel sites... yeah, YOU know the ones - in Prague, Munich, Shanghai etc.! That's a BAD neighbourhood over there! That IS going to put a world of hurt on you! And as for the Free For All link sites, web rings, and 3-way link schemes... that's just suicide in cyberspace! Why? Because it's a blatant and completely indefensible attempt at cheating the system!

Reciprocal Links - Almost a Waste of Effort
Reciprocal links are still of some value, provided the link titles are explicit and the page they link to you from has a higher PageRank than the page from which you link to them. The concept of a link to you being a vote for you, added to your site's total vote count, has a flip side: a link from you to someone else essentially deducts one vote from your total vote count... meaning a reciprocal link's value is minimal when compared to a 1-way incoming back-link!

1-way Outward Links Are Toxic
OK, let's assume you are a service provider - maybe a health clinic - and you deal with hospitals, other doctors, specialists, nurses and laboratories. So, as a benefit to your visitors, you place direct links to their web resources on your links page. Is that clever?

Most certainly it is NOT! Transfusion time, because you'll be haemorrhaging PageRank with nothing in return! Do it, but be smart about it, because there is NOTHING to be gained (by you) from linking to any site that does not link back. So make sure such links include the rel="nofollow" attribute, which tells search engines that the link is NOT a vote by your site for that site!
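For anyone unsure what that looks like in the markup, a nofollow link is just an ordinary anchor with an extra rel attribute (the URL below is only an example):

<a href="http://www.example.com/" rel="nofollow">Example Hospital</a>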

Link Content Is Mission Critical
This is mission critical because Google and others have decided that they can't trust you to be honest about your site! Basically, it seems like there are two web tribes - those who don't know very much about how things work, and those who know more than they should. There should also be a flourishing third tribe, who just build great sites with lots of terrific content that automatically ranks highly - but nobody's seen nuthin' from those guys for ages!

The tribe who know more than they should ruthlessly manipulate every available loophole to dominate search engine rankings, at the expense of those who have yet to read SEO For Dummies. Therefore, Google decided that it's essential that there is some external correlation between what YOU say your site is about and what OTHER people say your site is about... This is done by analysing the words in the link title on all links pointing to your site. The bottom line here is: if a keyword phrase does NOT appear in links to your site, you ain't gonna rank for that phrase!

For many established sites, this is the main reason they might have experienced a noticeable decline in rankings in the last few months. Most older sites will have a majority of incoming links based on their business name, and NOT on their activities / products / services / location etc. To use the common "widgets" analogy - if you are selling "widgets" and all your incoming link titles contain only your business name, e.g. Smiths Manufacturing Co Ltd, it's now very difficult for you to rank for "widgets"!

Backlink analysis reveals this shortcoming rather quickly and, lucky for you, it is possible to remedy this by building 1-way incoming back-links using multiple Title / Description combinations that contain a good spread of relevant keywords. It does require some keyword research, and it is tedious - but if you don't do it, you are certainly not going forwards! But your competitors might be...

How to Build a Better Website Without Building a Website

The most important thing to consider, when first thinking about any website, is the user. Like so much marketing, websites are, unfortunately, too often developed 'inside out' (company focused) rather than 'outside in' (customer focused).

All website users have their own reasons and objectives for visiting a site. No matter how targeted, any website has to communicate with a wide range of individual users.

To be successful, therefore, every site has to give each and every user a thorough but simple presentation of the site's content so that the site achieves your objectives e.g. registrations, leads, sales.

To do this successfully, users want:

Simple Navigation

Navigation that is clear and consistent.

Probably the worst issue is 'lost visitors' – those who are in a maze and don't know where they are in the site.

The site should always allow users to easily return to the home page and preferably get to any page with one click.

Studies have shown that users want to find things fast, and this means that they prefer menus with intuitive ranking, organization and multiple choices to many layers of simplified menus. The menu links should be placed in a consistent position on every page.

Clarity

Users do not appreciate an over-designed site.

A website should be consistent and predictable. For maximum clarity, your site design should be built on a consistent pattern of modular units that all share the same basic layout, graphics etc.

Designing Websites That Meet Their Objectives

Everything above is pretty simple, but how do you ensure that you can achieve it?

The answer is website architecture – an approach to the design and content that brings together not just design and hosting but all aspects of function, design, technical solutions and, most importantly, usability.

The distinction may seem academic, but imagine trying to publish a magazine using just graphic design and printing whilst ignoring content and editing. It just would not work, yet that's what too many people still try to do.

Website Architecture

Defining a website using web architecture requires:

  • Site maps
  • Flow charts
  • Wireframes
  • Storyboards
  • Templates
  • Style guide
  • Prototypes

This planning saves you (the client) money. The better the site map, flow chart, wireframes, storyboards, templates, style guide and prototype, the more time and money you save, because together they give the designer who has to do the graphics and the developer who has to do the programming a blueprint.

We are constantly amazed that people who wouldn't think about building a house, car, ship or whatever will still build a website without an architectural plan.

The benefits include:

  • Meeting business goals
  • Improved usability
  • Reducing unnecessary features
  • Faster delivery

Site Maps

Many people are familiar with site maps on web sites which are generally a cluster of links.

An architectural site map is more of a visual model (blueprint) of the pages of a web site.

The representation helps everyone to understand what the site is about and the links required as well as the different page templates that will be needed.

Flow Charts

A flowchart is another pictorial or visual representation to help visualize the content and find flaws in the process from say merchandise selection to final payment.

It's a pictorial summary that shows with symbols and words the steps, sequence, and relationship of the various operations involved and how they are linked so that the flow of visitors and information through the site is optimized.

Wireframes

Wireframes take their name from the skeletal wire structures that underlie a sculpture. Without this foundation, there is no support for the fleshing-out that creates the finished piece.

Wireframes are a basic visual guide to suggest the layout and placement of fundamental design elements on any page. A wireframe shows every click-through possibility on your site. It's a "text only" model to allow for the development of variations before any expensive graphic design and programming, but one that also helps to maintain design consistency throughout the site.

Creating wireframes allows everyone on the client and developer side to see the site and whether it's 'right' or needs changes without expensive programming. The goal of a wireframe is to ensure your visitors' needs will be met in the website. If you meet their needs, you will meet your objectives.

To create a wireframe requires dialogue. You and your developers talk in order to translate your business successfully into a website. Nobody knows your business better than you, and your developers should listen to ensure the resulting wireframe accurately represents it. You, however, must answer the questions, such as:

  • What does a visitor do at this point?
  • Where can a visitor go from here?

and ignore questions about what your visitor sees at this point. Sounds easy, but it isn't!

Storyboards

Storyboards were first used by Walt Disney to produce cartoons. A storyboard is a "comic" produced to help everyone visualize the scenes and find potential problems before they occur. When creating a film, a storyboard provides a visual layout of events as they are to be seen through the camera. In the case of a website, it is the layout and sequence in which the user or viewer sees the content or information.

For a website, the wireframe provides the outline for your storyboard. Developers and designers don't need to work in a vacuum: the wireframe guides every design, information architecture, navigation, usability and content consideration. Wireframes define "what is there", while the storyboards define "how it looks".

Templates and Style Guide

Templates are standard layouts containing the basic details of a page type. They separate the business logic (follow the $) from the presentation logic (graphics etc.) so that there can be maximum flexibility in presentation while disrupting the underlying business infrastructure as little as possible.

Style guides document the design requirements for a site. They define font classes and other design conventions (line spacing, font sizes, underlining, bullet types etc.) to be followed in the Cascading Style Sheets (CSS), which provide a library of styles used across the various page types of a website.

Prototypes

A prototype is a working model that is not yet finished. It demonstrates the major technical, design and content features of the site.

A prototype does not have the same testing and documentation as the final product, but it allows the client and developers to make sure, once again, that the final product will work in the way that is wanted and will meet the business objectives.

Once you have built your virtual site, it's a lot quicker, easier and cheaper to build the real one.


About The Author
Richard Hill is a director of E-CRM Solutions and has spent many years in senior direct and interactive marketing roles. E-CRM provides EBusiness, ECommerce, EMarketing and ECRM services.
http://www.e-crm.co.uk/profile/message170807.html

Monday, June 11, 2007

Pencil Sketch Tutorial

Take any photo or artwork and create a pencil sketch as detailed or shaded as you want with a few simple steps.

If you're going to print the sketch on an inkjet printer, create a new file the size of your print at 300 ppi with a white background. Create a new layer and import or copy your photo onto this layer. Resize the photo to fit within the background area; Elements will automatically adjust the photo's resolution to print quality.

Change the picture to black and white ('Image | Mode | Grayscale'), then duplicate the layer (or drag and drop it onto the duplicate-layer icon in the Layers palette). Then invert the duplicate to a negative: 'Image | Adjustments | Invert', or just hit Ctrl+I.

Now set the duplicated layer's blend mode to 'Color Dodge' in the Layers palette. Then use 'Filter | Blur | Gaussian Blur', which will bring up a settings box. Move the radius slider to about 5 to see the sketch appear.

Adjust the radius for the quality of the sketch that you want. The higher settings will give you a more polished blended sketch look and the lower settings will give you fewer shades for a quick or rough sketch look.
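
For readers who would rather script the effect than click through menus, the same pipeline can be reproduced in a few lines of Python with the Pillow and NumPy libraries. This is only a rough sketch of the technique described above; the file names and the blur radius of 5 are assumptions, so adjust them to taste.

    # A minimal sketch of the same pencil-sketch pipeline in Python, using the
    # Pillow and NumPy libraries instead of Photoshop Elements. The file names
    # and the blur radius of 5 are placeholder assumptions.
    import numpy as np
    from PIL import Image, ImageFilter, ImageOps

    # 1. Load the photo and convert it to grayscale.
    photo = Image.open("photo.jpg").convert("L")

    # 2. Duplicate and invert the copy (the negative).
    inverted = ImageOps.invert(photo)

    # 3. Apply a Gaussian blur to the inverted copy; a larger radius gives a
    #    smoother, more blended sketch, a smaller one a rougher sketch.
    blurred = inverted.filter(ImageFilter.GaussianBlur(radius=5))

    # 4. Blend the blurred copy over the original in 'color dodge' mode:
    #    result = base * 255 / (255 - blend), clipped to the 0..255 range.
    base = np.asarray(photo, dtype=np.float32)
    blend = np.asarray(blurred, dtype=np.float32)
    sketch = np.clip(base * 255.0 / (255.0 - blend + 1e-6), 0, 255).astype(np.uint8)

    Image.fromarray(sketch).save("sketch.png")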

Good Web Design Feng Shui

Feng Shui, simply stated, is the harmonious flow of energy. Pronounced "foong shway", its basic principle is simplicity: removing everything that stops the natural flow of chi (life force, energy, particles) into our lives, to produce more harmony and balance.

The principles of Feng Shui can be applied to all areas of life, including relationships, business, websites, personal finances and even your closet, since there's nothing hidden from the natural flow of energy. You can slow it down, attract or repel it, but you can't stop it. It's not magic and, in spite of what you may have read about Feng Shui, you probably won't win the lottery just by placing a three-legged frog with a coin in its mouth near your front door. In fact, by diving right into Feng Shui without a good understanding of how chi works, you could actually increase the amount of negative energy that you want to dispel. What do I mean by that?

The universal law of attraction states that 'like attracts like', so if you have piles of clutter around you (or on your web site) or you are surrounded by things that no longer serve you or that you don't like, you will attract more of the same until you do something to change it. By removing things in your life (or on your web site) that serve no purpose, and only keeping those things that serve you well and that you absolutely love, you are going to attract more of the same. The flow of energy is both receptive and aggressive (Yin and Yang) and when this flow is out of balance, there is disharmony.

How does this apply to web design, you ask... good question. If you've ever been to a web site where the colors hurt your eyes, the music offends your ears or you have a frustrating experience trying to find what you're looking for, it's probably because the site does not have a good flow of information that is pleasing to your senses. Web designers call it 'user-centered design' and it's the way content is organized so the user can intuitively find it without having to fight their way through or to stop and think about it.

You can tell when a site is out of balance, often in more than one area: not just in the layout or the graphics, but in the simple, logical order that you have come to expect from visiting sites with good standards of web design and information architecture.

Sites with good Feng Shui typically have:

1) the logo in the top left corner and it's usually linked to the home page or a home link is provided on all pages,

2) the primary navigation is across the top or down the left side of the pages. If buttons are used for primary navigation, text links are duplicated at the bottom of each page, not only for better search engine results but so the user doesn't have to scroll back up to the top of the page to continue,

3) there is a visual balance on most pages of curves and corners with a pleasing color scheme,

4) headings are larger than content text and information is concise for skimming the page while providing the user with an option for more information if desired,

5) animation and ads are not forced on the visitor but are offered as a choice,

6) font is resizable in the browser, alt text is provided on images and the site is usable for people with disabilities and/or older browsers,

7) fresh content is added on a regular basis... weeding out or archiving out-of-date information to make room for new or more up-to-date information. This not only prevents 'web clutter', but tells your users that you want their visit to be useful,

8) graphics have been compressed in byte size so they load quickly (a minimal scripted example follows this list). Time is the main factor in web design: the whole purpose of database-driven websites is to load information quickly from the server instead of depending on slower, browser-version-dependent rendering.
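
On the last point, resizing and recompressing graphics doesn't have to be done by hand. Below is a minimal sketch in Python using the Pillow library; the file names, the 800-pixel maximum width and the JPEG quality setting are assumptions chosen purely for illustration.

    # Resize and recompress a photo so it loads quickly on the web. The file
    # names, the 800-pixel maximum width and the JPEG quality of 80 are
    # placeholder assumptions, not recommendations from the article.
    from PIL import Image

    img = Image.open("banner.png")

    # Shrink to at most 800 x 800 pixels, preserving the aspect ratio.
    img.thumbnail((800, 800))

    # Save as an optimised JPEG; a lower 'quality' value means a smaller file.
    img.convert("RGB").save("banner.jpg", format="JPEG", quality=80, optimize=True)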

You can probably think of many more ways to improve the flow of energy on your website. There is room for improvement on every site, and the best way to find out is to ask your visitors what they like. After all, if you didn't build your site for your users' experience, then why did you build it?

Tuesday, May 15, 2007

Introduction

The web design industry is slowly growing into an organised sector, with a number of web design firms run by young entrepreneurs setting up shop all across Goa. These firms offer a plethora of services like web hosting, web design, web development, search engine optimisation, Flash design and multimedia presentations.

As a team of well experienced professionals, Team Inertia Technologies offers its clients robust, aesthetic website design and development options that cater to their online business promotion needs.

Some of the technologies we work with are LAMP (that is, Linux, Apache, MySQL and PHP) and Microsoft technologies like C#, ASP.NET, VB.NET and XML.

We are one of the few IT firms in Goa that provide payment gateways for clients, along with customised integration into their websites.

We are also a popular outsourcing destination for medium-sized IT projects. Our clients are based in the UK, the US and the Middle East.

Our team currently has 10 members with a future expansion plan on the anvil. Some of our clients include Goa University, Goa Legislative Assembly, Goa Institute of Management, Best Goa Deals, Cinderella's Ashirwad Matrimony, INS Mandovi, Sharada Mandir, GKB Optolabs etc.

For more information about us, log on to our website, www.teaminertia.com.