Wednesday 22 December 2010

Cross Domain communication using HTML5 postMessage

One of the cool new features in HTML5 is Cross Document Messaging. What makes this feature really nice is that all the next-generation browsers support it: Internet Explorer 8, Firefox 3, Opera 9 and so on. Facebook, for example, is already using this feature to support web-based instant messaging.

window.postMessage() is available to all windows (including the current window, popups, iframes, and frames) and lets you send textual messages from your current window to any other - regardless of any cross-domain policies that might exist.

The window.postMessage("string") method generates a message DOM event on the receiving document. This event object carries the message in its event.data property, which the receiving document can use however it sees fit.

The demo demonstrates how easy it is for two iframes of different origins to talk to each other.


window.document.onmousemove = function(e) {
    var x = (window.Event) ? e.pageX : window.event.clientX;
    var y = (window.Event) ? e.pageY : window.event.clientY;

    // this sends the data to the second iframe of the current page
    window.parent.frames[1].postMessage('x = ' + x + ' y = ' + y, '*');
};

var onmessage = function(e) {
var data = e.data;
var origin = e.origin;
document.getElementById('display').innerHTML = data;
};

if (typeof window.addEventListener != 'undefined') {
window.addEventListener('message', onmessage, false);
} else if (typeof window.attachEvent != 'undefined') {
window.attachEvent('onmessage', onmessage);
}


Security Issues

<div id="test">Send me a message!</div>
<script>
window.addEventListener("message", function(e){
document.getElementById("test").textContent =
e.origin + " said: " + e.data;}, false);
</script>

1. If you're expecting a message from a specific domain, set of domains, or even a specific URL, please remember to verify the event's .origin property as messages come in, otherwise another page will be able to spoof this event for malicious purposes.

2. Just because a string is coming in as a message doesn't mean that it's completely safe. Note that in the example above I inject the string using .textContent; this is intentional. If I were to inject it using .innerHTML and the message contained markup, it could end up executing script upon injection. This is a critical point: you'll need to be sure to sanitize all your incoming messages before they are used and injected into the DOM.
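To make the first point concrete, here is a minimal sketch of an origin check. TRUSTED_ORIGIN and isTrustedMessage are names I made up for illustration; substitute the origin(s) you actually expect.

```javascript
// Sketch of verifying a message's sender before trusting it.
// TRUSTED_ORIGIN and isTrustedMessage are hypothetical names.
var TRUSTED_ORIGIN = "http://example.com";

function isTrustedMessage(e) {
    // e.origin is set by the browser and cannot be spoofed by the sender
    return e.origin === TRUSTED_ORIGIN;
}

// In the receiving page you would wire it up like this:
// window.addEventListener("message", function(e) {
//     if (!isTrustedMessage(e)) return; // drop messages from untrusted pages
//     document.getElementById("display").textContent = e.data;
// }, false);

console.log(isTrustedMessage({ origin: "http://example.com", data: "hi" })); // true
console.log(isTrustedMessage({ origin: "http://evil.example", data: "hi" })); // false
```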


Read more >>

Wednesday 8 December 2010

Configure the HTTP Expires Response Header (IIS 7)

1. Open IIS Manager and navigate to the level you want to manage.

2. In Features View, double-click HTTP Response Headers.

3. On the HTTP Response Headers page, in the Actions pane, click Set Common Headers.

4. In the Set Common HTTP Response Headers dialog box, select the Expire Web content check box and select one of the following options:
* Select Immediately if you want content to expire immediately after it is sent in a response.

* Select After if you want the content to expire periodically. Then, in the corresponding boxes, type an integer and select a time interval at which content expires. For example, type 1 and select Days if you want the content to expire daily.

* Select On (in Coordinated Universal Time (UTC)) if you want the content to expire on a specific day and at a specific time. Then, in the corresponding boxes, select a date and time at which the content expires.

5. Click OK.
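If you prefer configuration files over the UI, the same setting lives in web.config under system.webServer. As a sketch (verify against your IIS 7 schema), the "After 1 Days" example above should correspond to something like:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- expire content 1 day after it is served -->
      <clientCache cacheControlMode="UseMaxAge"
                   cacheControlMaxAge="1.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```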

Click here to read more.

Thursday 2 December 2010

How to find public key token for a .NET Framework DLL or assembly

For example, if you are looking for the public key token of System.Web.dll of .NET Framework 4, go to the Config folder of the Framework (normally it is C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config), open machine.config in a text editor and look for something similar to

<sectionGroup name="system.web"

If you find this line then look for "PublicKeyToken" property at the end of the same line.

Reading Querystring value using JavaScript

The following function can be used to read the value of given key from querystring.

/* function to read the value of the given key, x, from the querystring */
function GetQStringVal(x) {

    var a = location.search.substring(1);
    var b = a.split("&");

    for (var i = 0; i < b.length; i++) {
        var c = b[i].split("=");
        if (c[0].toLowerCase() == x.toLowerCase()) {
            return c[1];
        }
    }
    return "";
}
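A quick way to sanity-check the function outside the browser is to stub out location; the sample URL below is made up.

```javascript
// Stand-in for window.location on a page requested as
// page.aspx?refid=42&lang=en (a made-up example URL)
var location = { search: "?refid=42&lang=en" };

function GetQStringVal(x) {
    var a = location.search.substring(1);
    var b = a.split("&");
    for (var i = 0; i < b.length; i++) {
        var c = b[i].split("=");
        if (c[0].toLowerCase() == x.toLowerCase()) {
            return c[1];
        }
    }
    return "";
}

console.log(GetQStringVal("RefId")); // "42" - the key match is case-insensitive
console.log(GetQStringVal("foo"));   // ""   - missing keys return an empty string
```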

Using jQuery to modify QueryString

I recently worked on a requirement where I had to modify the querystring of all the links in a page before the actual request is made from the browser, i.e. when a link is clicked. There may be better ways to do this, but this is how I implemented it using jQuery.


$(document).ready(function() {
    $('a').click(function(event) {

        var $a = $(event.target);

        if ($a.is('a')) { // if the event is triggered by an <a> tag

            // append the refid querystring to the href if it is not
            // an external url, it refers to an ASPX page and it does
            // not already contain refid in its querystring
            var appendrefid =
                $a.attr("href").toLowerCase().indexOf("http://") == -1
                && $a.attr("href").toLowerCase().indexOf("https://") == -1
                && $a.attr("href").toLowerCase().indexOf(".aspx") != -1
                && $a.attr("href").toLowerCase().indexOf("refid=") == -1;

            if (appendrefid) {
                event.preventDefault();
                // use '&' if the href already carries a querystring
                var separator = $a.attr("href").indexOf("?") == -1 ? "?" : "&";
                location.href = $a.attr("href") + separator + "refid=" + GetQStringVal("refid");
            }
        }
    });
});

/* function to read the value of the given key, x, from the querystring */
function GetQStringVal(x) {

    var a = location.search.substring(1);
    var b = a.split("&");

    for (var i = 0; i < b.length; i++) {
        var c = b[i].split("=");
        if (c[0].toLowerCase() == x.toLowerCase()) {
            return c[1];
        }
    }

    return "";
}


Here the main page, where the content is coming from the database, is always requested with refid in the querystring and the above script reads the refid from the original request and appends it to the newly requested page when the link is clicked.

Using the ScriptManager of Master page in content page

This can be done using ScriptManager.GetCurrent() method.

Use this static method to determine whether a ScriptManager control is on a page, or to access the properties and methods of a ScriptManager control when you do not know its ID.


ScriptManager scriptManager;

if (ScriptManager.GetCurrent(Page) == null)
{
    scriptManager = new ScriptManager();
    scriptManager.ID = "ScriptManager1";
    Page.Form.Controls.Add(scriptManager);
}
else
{
    scriptManager = ScriptManager.GetCurrent(Page);
}

Thursday 18 November 2010

JavaScript getTime() function

getTime() : Returns number of milliseconds since 1 January 1970

<script type="text/javascript">
alert((new Date()).getTime())
</script>

Click here for a full list of date and time functions.

jQuery event.preventDefault() cancels the default action of the event

If this method is called, the default action of the event will not be triggered.

For example, clicked anchors will not take the browser to a new URL. We can use event.isDefaultPrevented() to determine if this method has been called by an event handler that was triggered by this event.

<html>
<head>
<script src="http://code.jquery.com/jquery-1.4.4.js"></script>
</head>
<body>

<a href="http://jquery.com">default click action is prevented</a>
<div id="log"></div>

<script>
$("a").click(function(event) {
event.preventDefault();
$('<div/>')
.append('default ' + event.type + ' prevented')
.appendTo('#log');
});
</script>

</body>
</html>

jQuery $(document).ready vs $(window).load

jQuery offers two powerful methods to execute code and attach event handlers: $(document).ready and $(window).load. The document ready event fires when the HTML document has been loaded and the DOM is ready, even if all the graphics haven't loaded yet. If you want to hook up your events for certain elements before the window loads, then $(document).ready is the right place.

$(document).ready(function() {
// executes when HTML-Document is loaded and DOM is ready
alert("DOM is ready");
});

The window load event executes a bit later when the page is fully loaded, including all frames, objects and images. Therefore functions which access images or other page contents should be placed in the load event for the window.

$(window).load(function() {
// executes when the page is fully loaded, including all frames, objects and images
alert("Page is fully loaded now");
});

Friday 12 November 2010

jQuery unload() method

The unload event is sent to the window element when the user navigates away from the page. This could mean one of many things. The user could have clicked on a link to leave the page, or typed in a new URL in the address bar. The forward and back buttons will trigger the event. Closing the browser window will cause the event to be triggered. Even a page reload will first create an unload event.

Any unload event handler should be bound to the window object:

$(window).unload(function() {
alert('See you again');
});

After this code executes, the alert will be displayed whenever the browser leaves the current page. It is not possible to cancel the unload event with .preventDefault(). This event is available so that scripts can perform cleanup when the user leaves the page.

jQuery API

jQuery .submit() method

The submit event is sent to an element when the user is attempting to submit a form. It can only be attached to <form> elements. Forms can be submitted either by clicking an explicit <input type="submit">, <input type="image">, or <button type="submit">, or by pressing Enter when a certain form element has focus.

Depending on the browser, the Enter key may only cause a form submission if the form has exactly one text field, or only when there is a submit button present. The interface should not rely on a particular behavior for this key unless the issue is forced by observing the keypress event for presses of the Enter key.

For example, consider the HTML:

<form id="target" action="destination.html">
<input type="text" value="Hello there" />
<input type="submit" value="Go" />
</form>
<div id="other">
Click here to trigger the handler
</div>

The event handler can be bound to the form:

$('#target').submit(function() {
alert('It is from submit() handler');
return false;
});

Now when the form is submitted, the message is alerted. This happens prior to the actual submission, so we can cancel the submit action by calling .preventDefault() on the event object or by returning false from our handler. We can trigger the event manually when another element is clicked:

$('#other').click(function() {
$('#target').submit();
});

jQuery API

Friday 5 November 2010

Defer loading of JavaScript

Deferring loading of JavaScript functions that are not called at startup reduces the initial download size, allowing other resources to be downloaded in parallel, and speeding up execution and rendering time.

Like stylesheets, scripts must be downloaded, parsed, and executed before the browser can begin to render a web page. Again, even if a script is contained in an external file that is cached, processing of all elements below the script is blocked until the browser loads the code from disk and executes it. However, for some browsers, the situation is worse than for stylesheets: while JavaScript is being processed, the browser blocks all other resources from being downloaded. For AJAX-type applications that use many bytes of JavaScript code, this can add considerable latency.

For many script-intensive applications, the bulk of the JavaScript code handles user-initiated events, such as mouse-clicking and dragging, form entry and submission, hidden elements expansion, and so on. All of these user-triggered events occur after the page is loaded and the onload event is triggered. Therefore, much of the delay in the "critical path" (the time to load the main page at startup) could be avoided by deferring the loading of the JavaScript until it's actually needed. While this "lazy" method of loading doesn't reduce the total JS payload, it can significantly reduce the number of bytes needed to load the initial state of the page, and allows the remaining bytes to be loaded asynchronously in the background.

To use this technique, you should first identify all of the JavaScript functions that are not actually used by the document before the onload event. For any file containing more than 25 uncalled functions, move all of those functions to a separate, external JS file. This may require some refactoring of your code to work around dependencies between files. (For files containing fewer than 25 uncalled functions, it's not worth the effort of refactoring.)

Then, you insert a JavaScript event listener in the head of the containing document that forces the external file to be loaded after the onload event. You can do this by any of the usual scripting means, but we recommend a very simple scripted DOM element (to avoid cross-browser and same-domain policy issues). Here's an example (where "deferredfunctions.js" contains the functions to be lazily loaded):

<script type="text/javascript">

// Add a script element as a child of the body
function downloadJSAtOnload() {
var element = document.createElement("script");
element.src = "deferredfunctions.js";
document.body.appendChild(element);
}

// Check for browser support of event handling capability
if (window.addEventListener)
window.addEventListener("load", downloadJSAtOnload, false);
else if (window.attachEvent)
window.attachEvent("onload", downloadJSAtOnload);
else window.onload = downloadJSAtOnload;

</script>

Visit Google Page Speed to read more tips about WebSite performance optimization.

Friday 29 October 2010

View Installed Plugins, Cache Information, and Advanced Config in FireFox

Plugins
Just type in about:plugins in the addressbar and press enter.

Cache
Just type in about:cache in the addressbar and press enter.

Advanced Configuration Settings
Just type in about:config in the addressbar and press enter.

Tuesday 26 October 2010

Javascript proxy generator uses full URL in set_path in .NET 4

The javascript proxy generator for WCF services used to send just the path as the argument to set_path in .NET 3.5, such as:

service.set_path("/service.svc");

You can see a line of code similar to this if you go through the proxy javascript code.

In .NET 4, this has changed to the full request absolute url:

service.set_path("http://host/service.svc");

I noticed this change recently when we had an issue with it in IE8. I was getting an "Access Denied" exception when the site was requested via HTTPS and the service method was called on the click of a button in the page. After spending some time debugging the JavaScript using IE8's built-in developer tools, I found that the reason for the error was this particular line of code.

The workaround, if you have control over calling the service through the proxy, is to call set_path on the proxy with the correct path before making the service call, as below.

Service.set_path("/ProxyService.svc");
Service.CallMethod(arg, CallBackResult, CallBackError, context);

Read more >>

Monday 25 October 2010

CSS Hacks

Dealing with browser inconsistencies often makes up a majority of the work for a web designer. Sometimes there is no reasonable way to accomplish a desired layout in all major web browsers without the use of some special exception rules for certain layout engines. Hacks necessarily lead to potential complications and should be avoided whenever possible, but when the circumstances require hacks to be used, it's best to know what your options are and weigh the consequences appropriately.

Using Bugs to Your Advantage



Some bugs are better than others. In this case, CSS parsing bugs can help us target specific versions of IE using a specially crafted selector:

* for IE7, prepend any rule with *+html,
* for IE6, prepend any rule with * html.

An example of a stylesheet containing rules for IE7 and IE6 compatibility:

div.highlight {
    background: red;
    float: left;
    margin-right: 10px;
    outline: 1px solid blue;
}

/* IE7 doesn't support outline, use border instead */
*+html div.highlight {
    border: 1px solid blue;
    margin: -1px;
    margin-right: 9px;
}

/* IE6 needs to fix the doubled margin bug */
* html div.highlight {
    display: inline;
}

So far, no CSS parsing bugs have been identified that would help us target IE8 using a selector only. What we can do, however, is use declaration parsing bugs to target specific versions:

* to target IE8 in standards mode, use /*\**/ before the colon and apply the \9 suffix to the value declaration,
* to target IE8 and below, use \9 before terminating a CSS value declaration,
* to target IE7 and below, use the * prefix before a CSS property declaration,
* to target IE6, use the _ (underscore) prefix before a CSS property declaration

To illustrate, let's apply a few of these rules just for fun:

.myClass {
    color: black;        /* normal CSS declaration */
    color /*\**/: red\9; /* IE8 standards mode */
    color: green\9;      /* IE8 and below */
    *color: blue;        /* IE7 and below */
    _color: purple;      /* IE6 */
}

Using CSS hacks enables us to apply fixes without the need for conditional comments, but at a cost – hacks usually don't validate, are hard to understand without additional comments and can be confusing for IDEs, depending on their validation and syntax highlighting implementations.

Given the choice between CSS hacks and conditional comments, I always pick the latter as my go-to method when dealing with browser version targeting. It has served me and my clients well in the past and continues to make my code more standards-oriented, readable and future proof – who knows when or if these hacks will start interfering with newer or other vendors' browser parsers. Granted, if all you need is to fix a declaration or two, you can still use hacks in your existing style sheets, but as soon as that number grows beyond, say, half a dozen, you should consider creating a separate style sheet.

Hacks vs. conditional comments



So what's the best way to address browser inconsistencies? Well, from my experience, I've decided on a strategy that seems to work best: let the oldest browser carry the burden of compatibility.

Remember, you are building websites that, ideally, should not need periodic check-ups and redevelopment (save for the obligatory redesigns, of course). So why should the browsers of tomorrow carry the burden of the browsers of yesteryear?

The way I've tackled this problem is including the following <head> structure:

<link rel="stylesheet" href="bridging.css"/>
<!--[if lte IE 8]>
<link rel="stylesheet" href="ie8.css"/>
<![endif]-->
<!--[if lte IE 7]>
<link rel="stylesheet" href="ie7.css"/>
<![endif]-->
<!--[if lte IE 6]>
<link rel="stylesheet" href="ie6.css"/>
<![endif]-->

In this case, IE6 will load all four style sheets (including additional @imports), and each subsequent version of IE will load one less. The beauty of this is that all CSS bugs are dealt with in a backward-compatible fashion, meaning rules need not be overridden for posterity. And when support for a specific version is dropped, its conditional comment can be dropped too, reducing the CSS code needed to maintain the website.

Click here for more information.




http://www.webdevout.net/css-hacks

1. Conditional comments
2. In-CSS hacks

    1. Easy selectors
    2. Minimized attribute selectors
    3. !important
    4. @import "non-ie.css" all;
    5. body[class|="page-body"]

3. Unrecommended hacks

    1. _property: value and -property: value
    2. *property: value
    3. body:empty
    4. a:link:visited, a:visited:link
    5. >body
    6. html*
    7. !ie
    8. !important!


CSS Hacks for IE6,IE7,IE8,IE9 and IE10

Wednesday 13 October 2010

URL Encoding

Some characters cannot be part of a URL (for example, the space) and some other characters have a special meaning in a URL: for example, the character # can be used to specify a subsection (or fragment) of a document; the character = is used to separate a name from a value. A query string may need to be converted (that is what URL Encoding is) to satisfy these constraints.

In particular, encoding the query string uses the following rules:

* Letters (A-Z and a-z), numbers (0-9) and the characters '.','-','~' and '_' are left as-is
* SPACE is encoded as '+'
* All other characters are encoded as %FF hex representation with any non-ASCII characters first encoded as UTF-8 (or other specified encoding)

The octet corresponding to the tilde ("~") character is often encoded as "%7E" by older URI processing implementations; the "%7E" can be replaced by "~" without changing its interpretation.

The encoding of SPACE as '+' and the selection of "as-is" characters distinguishes this encoding from RFC 1738.

Technically, the form content is only encoded as a query string when the form submission method is GET. The same encoding is used by default when the submission method is POST, but the result is not sent as a query string, that is, is not added to the action URL of the form. Rather, the string is sent as the body of the request.
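In JavaScript, encodeURIComponent comes close to these rules but emits %20 for a space and leaves !'()* unescaped. A small wrapper bridges the gap; formUrlEncode is a made-up helper name, not a built-in.

```javascript
// Form/query-string flavour of URL encoding, following the rules above.
// formUrlEncode is a hypothetical helper, not part of any standard API.
function formUrlEncode(s) {
    return encodeURIComponent(s)
        .replace(/%20/g, "+") // SPACE is encoded as '+'
        .replace(/[!'()*]/g, function(c) {
            // encodeURIComponent leaves these as-is, but the form
            // encoding expects them as %XX hex escapes
            return "%" + c.charCodeAt(0).toString(16).toUpperCase();
        });
}

console.log(formUrlEncode("a b&c=d~e")); // "a+b%26c%3Dd~e" - note '~' stays as-is
```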

http://en.wikipedia.org/wiki/Query_string

http://en.wikipedia.org/wiki/URL_encoding

There are two built-in methods in ASP.NET which can be used to encode a string or URL. They are Server.URLEncode() and Server.URLPathEncode().

Server.URLPathEncode method
URL-encodes the path portion of a URL string and returns the encoded string. It will leave the querystring, if present, as it is.

The Server.URLEncode method
The URLEncode method applies URL encoding rules, including escape characters, to a specified string.

URLEncode converts characters as follows:
* Spaces ( ) are converted to plus signs (+).
* Non-alphanumeric characters are escaped to their hexadecimal representation.

Browser URL encoding and website request validation

The Default View Source Editor Has Changed in Internet Explorer 8

When you click the View Source command in Internet Explorer 8, it uses the built-in viewer, which is part of the Developer Tools in Internet Explorer 8. The built-in viewer lets you dynamically refresh the view source window, increase or decrease the text size and lists other standard options.

Changing the View Source Editor in Internet Explorer

Internet Explorer 8 includes a built-in option to change the default view source editor. The setting is provided in the Developer Tools.

1. Open Internet Explorer

2. Press the F12 button to start the Developer Tools

3. From the File menu, click Customize Internet Explorer View Source

4. Select one of the following options:

* Default Viewer
* Notepad
* Other…

Wednesday 6 October 2010

Sitemaps

The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs in the site. This allows search engines to crawl the site more intelligently. Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol.

The webmaster can generate a Sitemap containing all accessible URLs on the site and submit it to search engines. Since Google, Bing, Yahoo, and Ask use the same protocol now, having a Sitemap would let the biggest search engines have the updated pages information.

Sitemaps supplement and do not replace the existing crawl-based mechanisms that search engines already use to discover URLs. Using this protocol does not guarantee that web pages will be included in search indexes, nor does it influence the way that pages are ranked in search results.

File format



The Sitemap Protocol format consists of XML tags. The file itself must be UTF-8 encoded. Sitemaps can also be just a plain text list of URLs, and they can be compressed in .gz format.

A sample Sitemap that contains just one URL and uses all optional tags is shown below.

<?xml version='1.0' encoding='UTF-8'?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://princepthomas.blogspot.com</loc>
<lastmod>2010-10-06</lastmod>
<changefreq>daily</changefreq>
<priority>0.5</priority>
</url>
</urlset>

Thursday 26 August 2010

How the Z-index Attribute Works for HTML Elements

There are many ways to classify elements on a Web page. For the purposes of this article and the z-index attribute, we can divide them into two categories: windowed and windowless.

Windowed Elements

* <OBJECT> tag elements
* ActiveX controls
* Plug-ins
* Dynamic HTML (DHTML) Scriptlets
* SELECT elements
* IFRAMEs in Internet Explorer 5.01 and earlier

Windowless Elements

* Windowless ActiveX controls
* IFRAMEs in Internet Explorer 5.5 and later
* Most DHTML elements, such as hyperlinks or tables

All windowed elements paint themselves on top of all windowless elements, despite the wishes of their container. However, windowed elements do follow the z-index attribute with respect to each other, just as windowless elements follow the z-index attribute with respect to each other.

All windowless elements are rendered on the same MSHTML plane, and windowed elements draw on a separate MSHTML plane. You can use z-index to manipulate elements on the same plane but not to mix and match with elements in different planes. You can rearrange the z-indexing of the elements on each plane, but the windowed plane always draws on the top of the windowless plane.

http://support.microsoft.com/kb/177378

How to Use the Canonical Tag

Google, Yahoo & Microsoft Unite On “Canonical Tag” To Reduce Duplicate Content Clutter

The web is full of duplicate content. Search engines try to index and display the original or “canonical” version. Searchers only want to see one version in results. And site owners worry that if search engines find multiple versions of a page, their link credit will be diluted and they’ll lose ranking.

Today, Google, Yahoo and Microsoft (links are to their separate announcements) have united to offer a way to reduce duplicate content clutter and make things easier for everyone. Webmasters rejoice! Worried about duplicate content on your site? Want to know what “canonical” means? Read on for more details.

Multiple URLs, one page

Duplicate content comes in different forms, but a major scenario is multiple URLs that point to the same page. This can come up for lots of reasons. An ecommerce site might allow various sort orders for a page (by lowest price, highest rated…), the marketing department might want tracking codes added to URLs for analytics. You could end up with 100 pages, but 10 URLs for each page. Suddenly search engines have to sort through 1,000 URLs.

This can be a problem for a couple of reasons.

* Less of the site may get crawled. Search engine crawlers use a limited amount of bandwidth on each site (based on numerous factors). If the crawler only is able to crawl 100 pages of your site in a single visit, you want it to be 100 unique pages, not 10 pages 10 times each.

* Each page may not get full link credit. If a page has 10 URLs that point to it, then other sites can link to it 10 different ways. One link to each URL dilutes the value the page could have if all 10 links pointed to a single URL.

Using the new canonical tag

Specify the canonical version using a tag in the head section of the page as follows:

<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish"/>

That’s it!

* You can only use the tag on pages within a single site (subdomains and subfolders are fine).
* You can use relative or absolute links, but the search engines recommend absolute links.

This tag will operate in a similar way to a 301 redirect for all URLs that display the page with this tag.

* Links to all URLs will be consolidated to the one specified as canonical.
* Search engines will consider this URL a “strong hint” as to the one to crawl and index.

Canonical URL best practices

The search engines use this as a hint, not as a directive, (Google calls it a “suggestion that we honor strongly”) but are more likely to use it if the URLs use best practices, such as:

* The content rendered for each URL is very similar or exact
* The canonical URL is the shortest version
* The URL uses easy to understand parameter patterns (such as using ? and %)

Can this be abused by spammers? They might try, but Matt Cutts of Google told me that the same safeguards that prevent abuse by other methods (such as redirects) are in place here as well, and that Google reserves the right to take action on sites that are using the tag to manipulate search engines and violate search engine guidelines.

For instance, this tag will only work with very similar or identical content, so you can’t use it to send all of the link value from the less important pages of your site to the more important ones.

If tags conflict (such as pages point to each other as canonical, the URL specified as canonical redirects to a non-canonical version, or the page specified as canonical doesn’t exist), search engines will sort things out just as they do now, and will determine which URL they think is the best canonical version.

For more info visit
http://searchengineland.com/canonical-tag-16537
http://googlewebmastercentral.blogspot.com/2007/09/google-duplicate-content-caused-by-url.html
http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
How to Use the Canonical Tag

Regular Expressions

A regular expression (regex or regexp for short) is a special text string for describing a search pattern. You can think of regular expressions as wildcards on steroids. You are probably familiar with wildcard notations such as *.txt to find all text files in a file manager. The regex equivalent is .*\.txt$.

Some Definitions

We are going to be using the terms literal, metacharacter, target string, escape sequence and search string in this overview. Here is a definition of our terms:

literal A literal is any character we use in a search or matching expression, for example, to find ind in windows the ind is a literal string - each character plays a part in the search, it is literally the string we want to find.

metacharacter A metacharacter is one or more special characters that have a unique meaning and are NOT used as literals in the search expression, for example, the character ^ (circumflex or caret) is a metacharacter.

escape sequence An escape sequence is a way of indicating that we want to use one of our metacharacters as a literal. In a regular expression an escape sequence involves placing the metacharacter \ (backslash) in front of the metacharacter that we want to use as a literal, for example, if we want to find ^ind in w^indow then we use the search string \^ind and if we want to find \\file in the string c:\\file then we would need to use the search string \\\\file (each \ we want to search for (a literal) is preceded by an escape sequence \).

target string This term describes the string that we will be searching, that is, the string in which we want to find our match or search pattern.

search expression This term describes the expression that we will be using to search our target string, that is, the pattern we use to find what we want.

Brackets, Ranges and Negation

Bracket expressions introduce our first metacharacters, in this case the square brackets, which allow us to define a list of things to test for rather than the single characters we have been checking up until now. These lists can be grouped into what are known as character classes, typically comprising well-known groups such as all numbers etc.

[ ] Match anything inside the square brackets for one character position once and only once, for example, [12] means match the target to either 1 or 2 while [0123456789] means match to any character in the range 0 to 9.

- The - (dash) inside square brackets is the 'range separator' and allows us to define a range, in our example above of [0123456789] we could rewrite it as [0-9].

You can define more than one range inside a list e.g. [0-9A-C] means check for 0 to 9 and A to C (but not a to c).

NOTE: To test for - inside brackets (as a literal) it must come first or last, that is, [-0-9] will test for - and 0 to 9.

^ The ^ (circumflex or caret) inside square brackets negates the expression (we will see an alternate use for the circumflex/caret outside square brackets later), for example, [^Ff] means anything except upper or lower case F and [^a-z] means everything except lower case a to z.

NOTE: Spaces, or in this case the lack of them, between ranges are very important.
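Trying these bracket expressions out in JavaScript (an illustrative sketch):

```javascript
const oneOrTwo = /[12]/;       // matches 1 or 2 in one character position
const multiRange = /[0-9A-C]/; // 0 to 9 and A to C, but not a to c
const negated = /[^a-z]/;      // anything except lower case a to z

const r1 = oneOrTwo.test("route 1");  // 1 is in the list
const r2 = multiRange.test("b");      // lower case b is outside A-C
const r3 = negated.test("abcD");      // D is outside a-z
```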

Positioning (or Anchors)

^ The ^ (circumflex or caret) outside square brackets means look only at the beginning of the target string, for example, ^Win will not find Windows in 'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)' because the string does not begin with Win, but ^Moz will find Mozilla.

$ The $ (dollar) means look only at the end of the target string, for example, fox$ will find a match in 'silver fox' since it appears at the end of the string but not in 'the fox jumped over the moon'.

. The . (period) means any character in this position, for example, ton. will find tons and tonneau but not wanton, because in wanton no character follows ton.
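The anchors and the period can be checked directly in JavaScript; a quick sketch:

```javascript
const startsWithWin = /^Win/; // only at the beginning of the target
const endsWithFox = /fox$/;   // only at the end of the target
const tonDot = /ton./;        // ton followed by any one character

const a1 = startsWithWin.test("Windows NT");               // Win is first
const a2 = startsWithWin.test("Mozilla/4.0 (Windows NT)"); // Win is not first
const a3 = endsWithFox.test("silver fox");                 // fox at the end
const a4 = tonDot.test("wanton");                          // nothing follows ton
```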

Iteration 'metacharacters'

The following is a set of iteration metacharacters (a.k.a. quantifiers) that can control the number of times a character or string is found in our searches.

? The ? (question mark) matches the preceding character 0 or 1 times only, for example, colou?r will find both color and colour.

* The * (asterisk or star) matches the preceding character 0 or more times, for example, tre* will find tree and tread and trough.

+ The + (plus) matches the previous character 1 or more times, for example, tre+ will find tree and tread but not trough.
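The three quantifiers, sketched with the examples above:

```javascript
const colour = /colou?r/; // u is optional (0 or 1 times)
const treStar = /tre*/;   // e repeated 0 or more times
const trePlus = /tre+/;   // e repeated 1 or more times

const q1 = colour.test("color") && colour.test("colour"); // both spellings
const q2 = treStar.test("trough"); // "tr" with zero e's still matches
const q3 = trePlus.test("trough"); // at least one e is required here
```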

More 'metacharacters'

The following is a set of additional metacharacters that provide added power to our searches:

() The ( (open parenthesis) and ) (close parenthesis) may be used to group (or bind) parts of our search expression together.

"MSIE (5\.[5-9]|[6-9])" matches MSIE 5.5 (or greater) OR MSIE 6+. Note that the alternation must stay inside the parentheses: written as (5\.[5-9])|([6-9]), the | would split the entire expression, and any digit from 6 to 9 anywhere in the target would match.

| The | (vertical bar or pipe) is called alternation in techspeak and means find the left hand OR right values, for example, gr(a|e)y will find 'gray' or 'grey'.
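Grouping and alternation in JavaScript form; the MSIE pattern keeps the alternation bound inside the parentheses:

```javascript
const grey = /gr(a|e)y/; // alternation bound by parentheses

const g1 = grey.test("gray");
const g2 = grey.test("grey");
const g3 = grey.test("groy"); // o is not in the alternation

// The parentheses keep the | local to the version number:
const msie = /MSIE (5\.[5-9]|[6-9])/;
const g4 = msie.test("MSIE 5.5; Windows NT"); // 5.5 and above
const g5 = msie.test("MSIE 5.0; Windows NT"); // 5.0 is excluded
```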

Common Extensions and Abbreviations

Character Class Abbreviations

\d Match any character in the range 0 - 9.
\D Match any character NOT in the range 0 - 9.
\s Match any whitespace character (space, tab etc.).
\S Match any character that is NOT whitespace.
\w Match any word character: 0 - 9, A - Z, a - z and the underscore _.
\W Match any character that is NOT a word character.
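The abbreviations in action in JavaScript (note that \w also counts the underscore as a word character):

```javascript
const hasDigit = /\d/;    // same as [0-9]
const hasSpace = /\s/;    // whitespace
const wordOnly = /^\w+$/; // word characters only, start to end

const c1 = hasDigit.test("room 101");   // contains digits
const c2 = hasSpace.test("a b");        // contains a space
const c3 = wordOnly.test("file_name1"); // letters, digits and _ all count as \w
const c4 = wordOnly.test("file name");  // the space breaks the match
```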

Positional Abbreviations

\b Word boundary. Match any character(s) at the beginning (\bxx) and/or end (xx\b) of a word, thus \bton\b will find ton but not tons, but \bton will find tons.
\B Not word boundary. Match any character(s) NOT at the beginning(\Bxx) and/or end (xx\B) of a word, thus \Bton\B will find wantons but not tons, but ton\B will find both wantons and tons.
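The boundary abbreviations, sketched with the ton examples:

```javascript
const wholeTon = /\bton\b/;  // ton as a whole word
const tonAtStart = /\bton/;  // ton at a word's beginning
const tonInside = /\Bton\B/; // ton strictly inside a word

const b1 = wholeTon.test("a ton of bricks"); // whole word
const b2 = wholeTon.test("tons");            // the s removes the end boundary
const b3 = tonAtStart.test("tons");          // still begins a word
const b4 = tonInside.test("wantons");        // buried inside the word
```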

See Regular Expressions - User guide for more information.

http://www.regular-expressions.info/

Monday 26 July 2010

Unload Event in ASP.NET Page Life Cycle

The Unload event is raised after the page has been fully rendered, sent to the client, and is ready to be discarded. At this point, page properties such as Response and Request are unloaded and cleanup is performed.

During the unload stage, the page and its controls have been rendered, so you cannot make further changes to the response stream. If you attempt to call a method such as the Response.Write method, the page will throw an exception.

protected override void OnUnload(EventArgs e)
{
    base.OnUnload(e);

    // your code
}


ASP.NET Page Life Cycle Overview

General Page Life-Cycle Stages

Page request
The page request occurs before the page life cycle begins. When the page is requested by a user, ASP.NET determines whether the page needs to be parsed and compiled (therefore beginning the life of a page), or whether a cached version of the page can be sent in response without running the page.

Start
In the start stage, page properties such as Request and Response are set. At this stage, the page also determines whether the request is a postback or a new request and sets the IsPostBack property. The page also sets the UICulture property.

Initialization
During page initialization, controls on the page are available and each control's UniqueID property is set. A master page and themes are also applied to the page if applicable. If the current request is a postback, the postback data has not yet been loaded and control property values have not been restored to the values from view state.

Load
During load, if the current request is a postback, control properties are loaded with information recovered from view state and control state.

Postback event handling
If the request is a postback, control event handlers are called. After that, the Validate method of all validator controls is called, which sets the IsValid property of individual validator controls and of the page.

Rendering
Before rendering, view state is saved for the page and all controls. During the rendering stage, the page calls the Render method for each control, providing a text writer that writes its output to the OutputStream object of the page's Response property.

Unload
The Unload event is raised after the page has been fully rendered, sent to the client, and is ready to be discarded. At this point, page properties such as Response and Request are unloaded and cleanup is performed.

Life-Cycle Events

PreInit

Use this event for the following:

* Check the IsPostBack property to determine whether this is the first time the page is being processed. The IsCallback and IsCrossPagePostBack properties have also been set at this time.
* Create or re-create dynamic controls.
* Set a master page dynamically.
* Set the Theme property dynamically.
* Read or set profile property values.

***If the request is a postback, the values of the controls have not yet been restored from view state. If you set a control property at this stage, its value might be overwritten in the next event.

Init
Raised after all controls have been initialized and any skin settings have been applied. The Init event of individual controls occurs before the Init event of the page.

Use this event to read or initialize control properties.

InitComplete
Raised at the end of the page's initialization stage. Only one operation takes place between the Init and InitComplete events: tracking of view state changes is turned on. View state tracking enables controls to persist any values that are programmatically added to the ViewState collection. Until view state tracking is turned on, any values added to view state are lost across postbacks. Controls typically turn on view state tracking immediately after they raise their Init event.

Use this event to make changes to view state that you want to make sure are persisted after the next postback.

PreLoad
Raised after the page loads view state for itself and all controls, and after it processes postback data that is included with the Request instance.

Load
The Page object calls the OnLoad method on the Page object, and then recursively does the same for each child control until the page and all controls are loaded. The Load event of individual controls occurs after the Load event of the page.

Use the OnLoad event method to set properties in controls and to establish database connections.

Control events
Use these events to handle specific control events, such as a Button control's Click event or a TextBox control's TextChanged event.
Note: In a postback request, if the page contains validator controls, check the IsValid property of the Page and of individual validation controls before performing any processing.

LoadComplete
Raised at the end of the event-handling stage.

Use this event for tasks that require that all other controls on the page be loaded.

PreRender

Raised after the Page object has created all controls that are required in order to render the page, including child controls of composite controls. (To do this, the Page object calls EnsureChildControls for each control and for the page.)

The Page object raises the PreRender event on the Page object, and then recursively does the same for each child control. The PreRender event of individual controls occurs after the PreRender event of the page.

Use the event to make final changes to the contents of the page or its controls before the rendering stage begins.

PreRenderComplete
Raised after each data bound control whose DataSourceID property is set calls its DataBind method. For more information, see Data Binding Events for Data-Bound Controls later in this topic.

SaveStateComplete
Raised after view state and control state have been saved for the page and for all controls. Any changes to the page or controls at this point affect rendering, but the changes will not be retrieved on the next postback.

Render
This is not an event; instead, at this stage of processing, the Page object calls this method on each control. All ASP.NET Web server controls have a Render method that writes out the control's markup to send to the browser.

If you create a custom control, you typically override this method to output the control's markup. However, if your custom control incorporates only standard ASP.NET Web server controls and no custom markup, you do not need to override the Render method. For more information, see Developing Custom ASP.NET Server Controls.

A user control (an .ascx file) automatically incorporates rendering, so you do not need to explicitly render the control in code.

Unload
Raised for each control and then for the page.

In controls, use this event to do final cleanup for specific controls, such as closing control-specific database connections.

For the page itself, use this event to do final cleanup work, such as closing open files and database connections, or finishing up logging or other request-specific tasks.

Note: During the unload stage, the page and its controls have been rendered, so you cannot make further changes to the response stream. If you attempt to call a method such as the Response.Write method, the page will throw an exception.

Tuesday 20 July 2010

ASP.NET 2.0 Wizard Control

One of the useful new controls in ASP.NET 2.0 is the <asp:wizard> control, which allows developers to easily create multi-step UI (with built-in previous/next functionality and state management of values).

There is a nice 14 minute online video now available that walks through how to build an ASP.NET 2.0 application from scratch that provides a customer online signup form system using the <asp:wizard> control, the asp.net validation controls, and the new System.Net.Mail mail library. You can watch it being built from scratch and learn the high-level concepts of how the Wizard control works here (to find other short task-focused videos in the new ASP.NET 2.0 "How Do I" series click here).

Here are a few other articles you can read to learn more about the <asp:wizard> control and how to take advantage of it:

* MSDN Magazine Cutting Edge Article (note: this is a little old -- but provides a good conceptual overview)
* ASP.NET QuickStart Samples for the Wizard Control
* Create a Basic Wizard Control
* Create an Advanced Wizard Control
* Wizard Control MSDN Reference Overview

One nice tip/trick you can use with the Wizard control is to host it within the new <atlas:updatepanel> control -- which turns the wizard into an Ajax-based wizard (no full page post-backs required). This is trivial to do and doesn't require any code changes. This blog post by Scott Gu talks a little about using the <atlas:updatepanel> with the December CTP drop of Atlas, and builds an Ajax task-list in 39 lines of code with it (no JavaScript required -- it can all be done with C#). You can learn more about the January CTP drop (which has a lot of enhancements and new features for the <atlas:updatepanel>) here.

http://weblogs.asp.net/scottgu/archive/2006/02/21/438732.aspx

Wednesday 7 July 2010

Explicit and Implicit Interface Implementation

A class that implements an interface can explicitly implement a member of that interface. When a member is explicitly implemented, it cannot be accessed through a class instance, but only through an instance of the interface.

Explicit interface implementation also allows the programmer to inherit two interfaces that share the same member names and give each interface member a separate implementation.


http://msdn.microsoft.com/en-us/library/aa288461(VS.71).aspx

Generating Forms Authentication Compatible Passwords (SHA1)

Why would we want to create an SHA1 Password Hash?
The answer to this is easy. It is dangerous to store passwords anywhere in plain text! SHA1 gives a quick and easy way to hash a password into a non-human-readable form. This means it is safer to store in a database, and should the database be viewed by anyone who shouldn't know the passwords, it will be much more difficult for them to work out what a user's password is.

When creating a Web Application we can use the HashPasswordForStoringInConfigFile method of the FormsAuthentication class (in the System.Web.Security namespace) to generate our SHA1 password hash.

The following section of code shows an example of this:

Dim encpass As String = _
    FormsAuthentication.HashPasswordForStoringInConfigFile(tbxPassword.Text, _
    "sha1")
tbxResult.Text = encpass

The code takes the text from the tbxPassword textbox control and hashes the contents with the SHA1 algorithm. The result is then placed in the tbxResult textbox control.

This hashed password can then be placed in your web.config file or in a database and used in your web application for Forms Authentication. In a future tutorial we will show how to go on and use this in an application.


http://www.stardeveloper.com/articles/display.html?article=2003062001&page=1

Cookieless Forms Authentication

ASP.NET 2.0 supports cookieless forms authentication. This feature is controlled by the cookieless attribute of the forms element. This attribute can be set to one of the following four values:

* UseCookies. This value forces the FormsAuthenticationModule class to use cookies for transmitting the authentication ticket.
* UseUri. This value directs the FormsAuthenticationModule class to rewrite the URL for transmitting the authentication ticket.
* UseDeviceProfile. This value directs the FormsAuthenticationModule class to look at the browser capabilities. If the browser supports cookies, then cookies are used; otherwise, the URL is rewritten.
* AutoDetect. This value directs the FormsAuthenticationModule class to detect whether the browser supports cookies through a dynamic detection mechanism. If the detection logic indicates that cookies are not supported, then the URL is rewritten.

If your application is configured to use cookieless forms authentication and the FormsAuthentication.RedirectFromLoginPage method is being used, then the FormsAuthenticationModule class automatically sets the forms authentication ticket in the URL. The following code example shows what a typical URL looks like after it has been rewritten:

http://localhost/CookielessFormsAuthTest/(F(-k9DcsrIY4CAW81Rbju8KRnJ5o_gOQe0I1E_jNJLYm74izyOJK8GWdfoebgePJTEws0Pci7fHgTOUFTJe9jvgA2))/Test.aspx


The section of the URL that is in parentheses contains the data that the cookie would usually contain. This data is removed by ASP.NET during request processing. This step is performed by the ASP.NET ISAPI filter and not in an HttpModule class. If you read the Request.Path property from an .aspx page, you won't see any of the extra information in the URL. If you redirect the request, the URL will be rewritten automatically.

Note It is not possible to secure authentication tickets contained in URLs. When security is paramount, you should use cookies to store authentication tickets.

http://msdn.microsoft.com/en-us/library/ff647070.aspx

Friday 25 June 2010

How to enable "View Source" option on secured (HTTPS) / encrypted pages in IE

Internet Explorer saves a lot of web site information and data in temporary files for faster retrieval in the future. For most web sites that isn't a problem, but saving encrypted web pages that should be secure to a temp file on your disk can pose a security risk. This tip will show you how to enable/disable the saving of encrypted web pages.

1. On the menu bar click on Tools and select Internet Options

2. Click on the Advanced tab

3. Scroll through the list to the Security section at the bottom

4. Uncheck the box next to the option "Do Not Save Encrypted Pages To Disk"

By default it is checked which disables the "View Source" option.

5. Click OK to close the window and initiate the changes

In other words, if "Do Not Save Encrypted Pages To Disk" is checked, the "View Source" option will be disabled when an encrypted page is rendered. To investigate the HTML output you have to uncheck the above-mentioned option.

Enabling Script Debugging in IE

1. Click the "Tools" menu
2. Click "Internet Options"
3. Click "Advanced"
4. Make sure "Disable Script Debugging (Internet Explorer)" is unchecked.

How to enable third-party cookies in your web browser

Internet Explorer 7 and Internet Explorer 8

1. Click the "Tools" menu
2. Click "Internet Options"
3. Select the "Privacy" tab
4. Click "Advanced"
5. Select "Override automatic cookie handling"
6. Select the "Accept" button under "Third-party Cookies" and click "OK"

Internet Explorer 6

1. Click the "Tools" menu
2. Click "Internet Options"
3. Select the "Privacy" tab
4. Move the settings slider to "Low" or "Accept all cookies"
5. Click "OK"

Firefox 3 and Firefox 3.5

1. Click the "Tools" menu
2. Click "Options..."
3. Select the "Privacy" menu
4. Make sure "Keep until" is set to "they expire"
5. Make sure "Accept third-party cookies" is checked

Firefox 2

1. Click the "Tools" menu
2. Click "Options..."
3. Select the "Privacy" menu
4. Make sure "Accept cookies from sites" is checked
5. Make sure "Keep until" is set to "they expire"

Safari

1. Click the "Safari" menu
2. Click "Preferences..."
3. Click the "Security" menu
4. For "Accept cookies" select "Always"

Google Chrome

1. Select the Wrench (spanner) icon at the top right
2. Select "Options"
3. Select the "Under the Bonnet" tab
4. Click "Content Settings" button
5. Select "Cookies" tab
6. Make sure "Block all third-party cookies without exception" is unchecked

Opera 9

1. Click the "Tools" menu
2. Click "Preferences..."
3. Click the "Advanced" tab
4. Select "Cookies" on the left list
5. Make sure "Accept cookies" is selected and uncheck "Delete new cookies when exiting Opera"
6. Click "OK"

Thursday 24 June 2010

PCI Scanning

PCI Scanning stands for "Payment Card Industry" scanning. It involves having a PCI ASV (Approved Scanning Vendor) scan any and all IP addresses that the public has access to, related to your website or your site's transaction process.

Basically, when your merchant account provider or bank asks you to conduct a PCI Scan, they are asking you to ensure that all IP addresses that feed into or out from your site are clean and virus-free.

PCI stands for Payment Card Industry. A group known as the PCI Council, made up of the five major credit card companies, came up with a set of security standards in order to ensure that there is consistency throughout when processing credit cards.

If you are a merchant or service provider and accept credit cards you must confirm PCI compliance at least once a year. In order to be PCI compliant, network security scans, or PCI scans, are mandatory for all merchants and service providers that collect, process, or transmit payment card account information.

So what exactly is PCI scanning? It is when an ASV (Approved Scanning Vendor) scans your website to check for any vulnerabilities. All PCI scans must be conducted by a third-party compliant network security scanning vendor. The scanning usually includes your website's IP address, but if you transfer your customers to a third-party shopping cart during the checkout process, then you should include their IP address to be scanned as well. This is very important because you could be held responsible if anyone gets hold of your clients' payment card information anywhere along the transaction process.

http://hubpages.com/hub/what-is-pci-scanning

Using jQuery pseudo selectors

jQuery offers a powerful set of tools for matching a set of elements in a document.

If you wish to use any of the meta-characters (#;&,.+*~':"!^$[]()=>|/ ) as a literal part of a name, you must escape the character with two backslashes: \\. For example, if you have an input with name="names[]", you can use the selector $("input[name=names\\[\\]]").

Attribute Ends With Selector [name$=value]

jQuery('[attribute$=value]')

where

attribute = An attribute name.
value = An attribute value. Quotes are optional.

Description: Selects elements that have the specified attribute with a value ending exactly with a given string.

Example

<input name="newsletter" />
<input name="milkman" />
<input name="jobletter" />
<script>$("input[name$='letter']").val("a letter");</script>

:first Selector

Description: Selects the first matched element.

The :first pseudo-class is equivalent to :eq(0). It could also be written as :lt(1). While this matches only a single element, :first-child can match more than one: One for each parent.

Example
<table>
  <tr><td>Row 1</td></tr>
  <tr><td>Row 2</td></tr>
  <tr><td>Row 3</td></tr>
</table>
<script>$("tr:first").css("font-style", "italic");</script>


http://api.jquery.com/category/selectors/

Tuesday 22 June 2010

Everything You Need to Know About Response.Redirect

Basics of Response.Redirect

When you request a page from a web server, the response you get has some headers at the top, followed by the body of the page. When viewed in your browser the headers are never seen, but are used by the browser application. I have the following page called test.asp

<%@ Language=VBScript %>
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
</HEAD>
<BODY>

<p>Hello



</BODY>
</HTML>

When I request that from the web server, this is the reply I get:

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Mon, 19 Mar 2001 15:07:44 GMT
Connection: close
Content-Length: 134
Content-Type: text/html
Set-Cookie: ASPSESSIONIDQQGQQJWO=OMCJFABDNCDLLBKAPNHJBKHD; path=/
Cache-control: private


<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
</HEAD>
<BODY>

<p>Hello



</BODY>
</HTML>

http://pubs.logicalexpressions.com/pub0009/lpmarticle.asp?id=214

Server.Transfer Vs. Response.Redirect

Server.Transfer() doesn't end the current request, it only instructs ASP.NET to stop rendering the current page and start rendering the new page instead. The client is none the wiser, from its point of view the server is still responding to the initial request, so the URL displayed in the address bar does not change.

  • The page transferred to should be another .aspx page. For instance, a transfer to an .asp or .asmx page is not valid.
  • The Transfer method preserves the QueryString and Form collections.
  • Transfer calls Response.End(), which throws a ThreadAbortException exception upon completion.

Response.Redirect() ends the current request and sends a 302 response code to the client. The client then issues another HTTP request to the redirected URL and processes the response. Since the client knows that the URL has changed, it displays the redirected URL in its address bar.

Response.Redirect simply sends a message down to the browser, telling it to move to another page.

Server.Transfer is similar in that it sends the user to another page with a statement such as Server.Transfer("newpage.aspx"). However, the statement has a number of distinct advantages and disadvantages.

Firstly, transferring to another page using Server.Transfer conserves server resources. Instead of telling the browser to redirect, it simply changes the "focus" on the Web server and transfers the request. This means you don't get quite as many HTTP requests coming through, which therefore eases the pressure on your Web server and makes your applications run faster.

But watch out: because the "transfer" process can work on only those sites running on the server, you can't use Server.Transfer to send the user to an external site. Only Response.Redirect can do that.

Secondly, Server.Transfer maintains the original URL in the browser. This can really help streamline data entry techniques, although it may make for confusion when debugging.

That's not all: The Server.Transfer method also has a second parameter—"preserveForm". If you set this to True, using a statement such as Server.Transfer("newpage.aspx", True), the existing query string and any form variables will still be available to the page you are transferring to.

For example, if the original page has a TextBox control called TextBox1 and you transferred to newpage.aspx with the preserveForm parameter set to True, you'd be able to retrieve the value of the original page's TextBox control by referencing Request.Form("TextBox1").

This technique is great for wizard-style input forms split over multiple pages. But there's another thing you'll want to watch out for when using the preserveForm parameter. ASP.NET has a bug whereby, in certain situations, an error will occur when attempting to transfer the form and query string values. You'll find this documented at http://support.microsoft.com/default.aspx?id=kb;en-us;Q316920.

The unofficial solution is to set the enableViewStateMac property to True on the page you'll be transferring to, then set it back to False. This records that you want a definitive False value for this property and resolves the bug.

So, in brief: Response.Redirect simply tells the browser to visit another page. Server.Transfer helps reduce server requests, keeps the URL the same and, with a little bug-bashing, allows you to transfer the query string and form variables.

Top Tip: Don't confuse Server.Transfer with Server.Execute, which executes the page and returns the results. It was useful in the past, but, with ASP.NET, it's been replaced with fresher methods of development. Ignore it.

ViewState and its role in ASP.NET page processing

Processing Page Requests
When an initial request for a page (a Web Form) is received by ASP.NET, it locates and loads the requested Web Form (and if necessary compiles the code). It is important to understand the sequence of events that occurs when a Web Forms page is processed. This knowledge will help you program your Web Forms pages and Web applications more effectively.

As described before, initial page requests are relatively simple. The real work gets done when a page is submitted to itself - and a postback request is generated. Here are a few notes on postback requests:

* The current value of every control on a Web Form is contained in the postback request. This is referred to as the Post Data
* The content of the ViewState is also contained in the Post Data. ViewState holds the original property values of every control on a Web Form - before the user made any changes
* If a postback was caused, for example, by a button click, Post Data is used to identify the button that caused the postback


Postback Event Processing Sequence
Here are the events (and the order) that are raised when a Button is clicked and a postback occurs:

1. Page.Init + Control.Init for every control on the Web Form
The first stage in the page life cycle is initialization. After the page's control tree is populated with all the statically declared controls in the .aspx source the Init event is fired. First, the Init event for the Page object occurs, then Init event occurs for each control on the Page. Viewstate information is not available at this stage.
2. Page.LoadViewState
After initialization, ASP.NET loads the view state for the page. ViewState contains the state of the controls the last time the page was processed on the server.
3. Page.ProcessPostData
Post Data gets read from the request and control values are applied to the controls initialized in stage 1.
4. Page.Load + Control.Load for each control on the Page
If this is the first time the page is being processed (check the Page.IsPostBack property), initial data binding is performed here.
5. "Change" events are fired for controls (TextChanged, SelectedIndexChanged, and similar)
The current value (from Post Data) is compared to the original value located in the ViewState. If there is a difference, "Changed" events are raised.
6. Server-side events are fired for any validation controls
7. Button.Click + Button.Command
The Click and Command events are fired for the button that caused the postback
8. Page.PreRender + Control.PreRender
9. Page.SaveViewState
New values for all the controls are saved to the view state for another round-trip to the server.
10. Page.Render

As you can see from the postback steps, the ViewState has a major role in ASP.NET. ViewState is a collection of name/value pairs where controls and the page itself store information that persists between web requests.

* The ASP.NET Page Life Cycle
* The Role of View State
* The Cost of View State
* How View State is Serialized/Deserialized
* Specifying Where to Store the View State Information (see how to store it in a file on the Web server rather than as a bloated hidden form field)
* Programmatically Parsing the View State
* View State and Security Implications

Follow the MSDN link to read more
http://msdn.microsoft.com/en-us/library/ms972976

ViewState vs PostBack Data

Why do some Web controls like Textbox retain values even after disabling the ViewState while others do not?

Let’s build a simple Web application to examine how ViewState works.

Create a blank Web project and paste the code given below in the page:

<script runat="server">
Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
        Handles btnSubmit.Click
    lblMessage.Text = "Goodbye everyone"
    lblMessage1.Text = "Goodbye everyone"
    txtMessage.Text = "Goodbye everyone"
    txtMessage1.Text = "Goodbye everyone"
End Sub
</script>
<form id="form1" runat="server">

<asp:Label runat="server" ID="lblMessage" EnableViewState="true"
    Text="Hello World"></asp:Label>
<asp:Label runat="server" ID="lblMessage1" EnableViewState="false"
    Text="Hello World"></asp:Label>
<asp:TextBox runat="server" ID="txtMessage" EnableViewState="true"
    Text="Hello World"></asp:TextBox>
<asp:TextBox runat="server" ID="txtMessage1" EnableViewState="false"
    Text="Hello World"></asp:TextBox>
<br />
<asp:Button runat="server"
    Text="Change Message" ID="btnSubmit"></asp:Button>
<br />
<asp:Button ID="btnEmptyPostBack" runat="server" Text="Empty Postback"></asp:Button>
</form>

The rendered page will have four controls (two text boxes and two labels) initialized with Hello World, plus two buttons.

Click on the Change Message button; the value in the controls will change to Goodbye everyone.

Now click on the Empty Postback button.

The expected result is that, after the postback, the Textbox (txtMessage1) and Label (lblMessage1) with EnableViewState = false should not retain the value and should therefore revert to Hello World, while the controls with ViewState enabled (txtMessage and lblMessage) should retain the value Goodbye everyone.

But this does not happen. Both Textboxes maintain the value irrespective of whether ViewState is enabled or disabled, but in the case of the Label control, if ViewState is disabled the value we changed programmatically is not retained.

Let's examine why this happens.

Page LifeCycle and ViewState

In page life cycle, two events are associated with ViewState:

* Load View State: This stage follows the initialization stage of the page life cycle. During this stage, ViewState information saved in the previous postback is loaded into the controls. This stage is skipped on the first request for the page, since there is no previous state to load; it runs on every subsequent postback.
* Save View State: This stage precedes the render stage of the page. During this stage, the current state (value) of each control is serialized into a Base64-encoded string and persisted in the hidden field (__VIEWSTATE) in the page.
* Load Postback Data: Though this stage has nothing to do with ViewState, it causes most of the misconceptions among developers. It only happens when the page has been posted back. ASP.NET controls that implement IPostBackDataHandler update their state from the posted form data. The important things to note about this stage are as follows:

1. The state (value) of these controls is retrieved NOT from ViewState but from the posted-back form.
2. The Page class hands the posted-back data only to those controls that implement IPostBackDataHandler.
3. This stage follows the Load View State stage; in other words, control state set during the Load View State stage is overwritten in this stage.

Why do some controls retain values even after disabling ViewState while others do not?


The answer is that controls that implement IPostBackDataHandler, such as TextBox and CheckBox, retain their state even after ViewState is disabled, because during the Load Postback Data stage these controls get their state from the posted-back form.

But controls like Label, which do not implement IPostBackDataHandler, get no state information from the posted-back data and therefore depend entirely on ViewState to maintain their state.

An interesting behavior: if we disable a control that implements IPostBackDataHandler, the browser does not submit its value, so ASP.NET will not process the control during postback. So in the above sample, if we disable the text box with EnableViewState = false, it will not retain the changed value and will behave like the Label control.


http://www.codeproject.com/KB/aspnet/ASPViewStateandPostBack.aspx

Friday 18 June 2010

JavaScript Cookies

Writing and Reading cookies using JavaScript
http://www.w3schools.com/js/js_cookies.asp

<html>
<head>
<script type="text/javascript">
function getCookie(c_name)
{
if (document.cookie.length > 0)
{
var c_start = document.cookie.indexOf(c_name + "=");
if (c_start != -1)
{
c_start = c_start + c_name.length + 1;
var c_end = document.cookie.indexOf(";", c_start);
if (c_end == -1) c_end = document.cookie.length;
return unescape(document.cookie.substring(c_start, c_end));
}
}
return "";
}

function setCookie(c_name,value,expiredays)
{
var exdate=new Date();
exdate.setDate(exdate.getDate()+expiredays);
document.cookie=c_name+ "=" +escape(value)+
((expiredays==null) ? "" : ";expires="+exdate.toUTCString());

}

function checkCookie()
{
var username = getCookie('username');
if (username!=null && username!="")
{
alert('Welcome again '+username+'!');
}
else
{
username=prompt('Please enter your name:',"");
if (username!=null && username!="")
{
setCookie('username',username,365);
}
}
}
</script>
</head>

<body onload="checkCookie()">
</body>
</html>
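The name-matching logic in getCookie() can be pulled out into a pure function that takes the raw cookie string as an argument, which also avoids the subtle bug where one cookie name matches the tail of another (this helper is a sketch, not part of the original snippet):

```javascript
// Parse a single cookie value out of a raw cookie string such as
// document.cookie ("a=1; b=2"). Returns "" when the name is absent.
function parseCookie(cookieString, name) {
  var pairs = cookieString.split(";");
  for (var i = 0; i < pairs.length; i++) {
    var pair = pairs[i].trim();
    // Require "name=" at the start of the pair, so "username"
    // does not accidentally match "xusername".
    if (pair.indexOf(name + "=") === 0) {
      return decodeURIComponent(pair.substring(name.length + 1));
    }
  }
  return "";
}
```

Call it as parseCookie(document.cookie, 'username') in the browser.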

Preventing Users From Copying Text From and Pasting It Into TextBoxes

Copy, Cut, and Paste Events in JavaScript
Much like ASP.NET, JavaScript code is typically event-driven, meaning that certain blocks of JavaScript execute in response to particular events. For example, you can run a block of script when the web page loads, when a form is submitted, or when an HTML element is clicked. If you are not familiar with JavaScript's event model, check out Introduction to Events, which gives a great overview of how event handling works in JavaScript.

JavaScript includes events that fire when the user attempts to copy, cut, or paste from within the browser window: copy, cut, and paste. What's more, by creating an event handler for these events you can write script that cancels the default behavior, meaning that with just a few lines of JavaScript you can "turn off" copying, cutting, and/or pasting within the browser. The copy, cut, and paste events are supported in most modern browsers, including Internet Explorer 5.5 and up, Firefox 3.0 and up, Safari 3.0 and up, and Chrome, although the support differs a bit between browsers. For instance, Firefox, Safari, and Chrome fire the copy, cut, and paste events when the user copies, cuts, or pastes anywhere in the document, but Internet Explorer only fires these events if the copy, cut, or paste occurs within a form, on an HTML element, within a text input, or on an image. (For more information on browser compatibility for these events, refer to the cut, copy, paste Event Compatibility Matrix.)

Remarks:
1. You cannot truly prevent someone from copying, cutting, or pasting. The techniques we'll examine in this article show how to use JavaScript to put up a roadblock to copying, cutting, and pasting. However, a determined user could disable JavaScript in their browser, at which point the JavaScript you've written to prevent copying, cutting, and pasting is moot.
2. Preventing copying, cutting, and pasting can lead to a jarring and frustrating user experience. No matter what car you get into, when you turn the key you expect the engine to start. Imagine how frustrated you would become if you rented a car, hopped in, turned the key, and nothing happened. The same sentiment applies to computer user interfaces. Users expect certain functionality to be available when they sit down at a keyboard. They expect Ctrl+C to copy the selected contents to the clipboard, and Ctrl+V to paste. Disabling these comfortable and well-known idioms can unnerve users.

For these reasons, I would only recommend disabling copy and paste operations in specific circumstances where the benefits outweigh the negatives. Furthermore, I'd suggest giving users some sort of obvious and visual feedback when they attempt to copy or paste so that they understand these operations have been disabled and that it's not some software bug or hardware failure at play.

Using jQuery to Disable Copy and Paste

<script type="text/javascript">
$(document).ready(function () {
$('input[type=text]').bind('copy paste', function (e) {
e.preventDefault();
});
});
</script>

<script type="text/javascript">
$(document).ready(function () {
$('#id_of_textbox').bind('paste', function (e) {
e.preventDefault();
alert("You cannot paste text into this textbox!");
});
});
</script>

<script type="text/javascript">
$(document).ready(function () {
$('#<%=txtEmail.ClientID%>').bind('copy', function (e) {
e.preventDefault();
});


$('#<%=txtConfirmEmail.ClientID%>').bind('paste', function (e) {
e.preventDefault();
});
});
</script>
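The same effect can be achieved without jQuery by attaching handlers with the standard addEventListener API; disableCopyPaste below is a hypothetical helper name, not part of the original article:

```javascript
// Cancel the default copy and paste behavior on any event target,
// e.g. a text input obtained via document.getElementById(...).
function disableCopyPaste(el) {
  function cancel(e) { e.preventDefault(); }
  el.addEventListener("copy", cancel);
  el.addEventListener("paste", cancel);
  return cancel; // returned so callers can removeEventListener later
}
```

Usage: disableCopyPaste(document.getElementById('txtEmail'));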

When a user attempts to copy or paste, the "message" <div>'s text is set to an appropriate message; the <div> is positioned to the right of the textbox and faded in over three seconds, after which it fades out over 1.5 seconds.

<script type="text/javascript">
$(document).ready(function () {
$('#<%=txtEmail.ClientID%>').bind('copy', function (e) {
e.preventDefault();

$('#message').text("You cannot copy the text from this textbox...")
.css(
{
left: 20 + $(this).offset().left + $(this).width() + 'px',
top: $(this).offset().top + 'px'
})
.fadeIn(3000, function () { $(this).fadeOut(1500) });
});
});
</script>

http://www.4guysfromrolla.com/articles/060910-1.aspx

Method to read Request Query String / Form Parameters

public string RequestParam(string ParamName)
{
// Prefer the posted form value; fall back to the query string.
// Checking the named value directly avoids missing a query string
// parameter just because the form collection happens to be non-empty.
string Result = Context.Request.Form[ParamName];

if (Result == null)
{
Result = Context.Request.QueryString[ParamName];
}

return (Result == null) ? String.Empty : Result.Trim();
}
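For the client side, a roughly analogous lookup can be sketched in JavaScript using the standard URLSearchParams API (the helper name is mine):

```javascript
// Read a named parameter from a query string, returning "" when absent,
// mirroring the server-side method above.
function requestParam(queryString, name) {
  var params = new URLSearchParams(queryString);
  var value = params.get(name);
  return value === null ? "" : value.trim();
}
```

In a browser you would call requestParam(window.location.search, "id").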

Managing User Account Creation

Many websites that support user accounts allow anyone to create a new account, but require new users to undergo some form of verification before their account is activated. A common approach is to send an email to the newly created user with a link that, when visited, activates their account. This approach ensures that the email address entered by the user is valid (since the link is sent to that address). This workflow not only ensures valid data entry, but also helps deter automated spam bots and abusive users.

In past installments of this article series we've seen how to use the CreateUserWizard control to allow users to create new accounts. By default, the user accounts created by the CreateUserWizard control are activated; new users can login immediately and start interacting with the site. This default behavior can be customized, however, so that new accounts are disabled. A disabled user cannot log into the site; therefore, there needs to be some manner by which a newly created user can have her account enabled.

There are many ways by which an account may be activated. You could have each account manually verified by an administrative user. If your site requires users to pay some sort of monthly fee or annual due, you could have the account approved once the payment has been successfully processed. As mentioned above, one very common approach is to require the user to visit a link sent to the email address they entered when registering.

This article explores this latter technique.

Wednesday 2 June 2010

CAPTCHA

A CAPTCHA or Captcha is a type of challenge-response test used in computing to ensure that the response is not generated by a computer. The process usually involves one computer (a server) asking a user to complete a simple test which the computer is able to generate and grade. Because other computers are unable to solve the CAPTCHA, any user entering a correct solution is presumed to be human. Thus, it is sometimes described as a reverse Turing test, because it is administered by a machine and targeted to a human, in contrast to the standard Turing test that is typically administered by a human and targeted to a machine. A common type of CAPTCHA requires that the user type letters or digits from a distorted image that appears on the screen.

The term "CAPTCHA" (based upon the word capture) was coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford (all of Carnegie Mellon University). It is a contrived acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart." Carnegie Mellon University attempted to trademark the term, but the trademark application was abandoned on 21 April 2008.

http://en.wikipedia.org/wiki/CAPTCHA

Free CAPTCHA ASP.NET Control

HTTP Status Codes

When a request is made to your server for a page on your site (for instance, when a user accesses your page in a browser or when Googlebot crawls the page), your server returns an HTTP status code in response to the request.

This status code provides information about the status of the request. This status code gives browser/Googlebot information about your site and the requested page.

Some common status codes are:

* 200 - the server successfully returned the page
* 404 - the requested page doesn't exist
* 503 - the server is temporarily unavailable

Visit W3C page on HTTP status codes for more information


1xx (Provisional response)


Status codes that indicate a provisional response and require the requestor to take action to continue.

100 (Continue)
The requestor should continue with the request. The server returns this code to indicate that it has received the first part of a request and is waiting for the rest.
101 (Switching protocols)
The requestor has asked the server to switch protocols and the server is acknowledging that it will do so.

2xx (Successful)

Status codes that indicate that the server successfully processed the request.

200 (Successful)
The server successfully processed the request. Generally, this means that the server provided the requested page. If you see this status for your robots.txt file, it means that Googlebot retrieved it successfully.
204 (No content)
The server successfully processed the request, but isn't returning any content.
206 (Partial content)
The server successfully processed a partial GET request.

3xx (Redirected)

Further action is needed to fulfill the request. Often, these status codes are used for redirection. Google recommends that you use fewer than five redirects for each request. You can use Webmaster Tools to see if Googlebot is having trouble crawling your redirected pages. The Crawl errors page under Diagnostics lists URLs that Googlebot was unable to crawl due to redirect errors.

300 (Multiple choices) The server has several actions available based on the request. The server may choose an action based on the requestor (user agent) or the server may present a list so the requestor can choose an action.

4xx (Request error)
These status codes indicate that there was likely an error in the request which prevented the server from being able to process it.

400 (Bad request) The server didn't understand the syntax of the request.

5xx (Server error)
These status codes indicate that the server had an internal error when trying to process the request. These errors tend to be with the server itself, not with the request.

500 (Internal server error) The server encountered an error and can't fulfill the request.
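Since the class of a status code is just its first digit, the groupings above can be expressed as a small helper; a sketch:

```javascript
// Map an HTTP status code to its class, per the groupings above.
function statusClass(code) {
  var classes = {
    1: "Provisional response",
    2: "Successful",
    3: "Redirected",
    4: "Request error",
    5: "Server error"
  };
  return classes[Math.floor(code / 100)] || "Unknown";
}
```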

Click here to read more.

Monday 17 May 2010

HTML <base> Tag

The <base> tag specifies a default address or a default target for all links on a page. Relative links within a document (such as <a href="someplace.html"... or <img src="someimage.jpg"...) will become relative to the URI specified in the base tag irrespective of what is present in the address bar.

The <base> tag goes inside the head element.

The <base> tag is supported in all major browsers.
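For example, with the following markup (example.com is a placeholder), the relative image path resolves to http://www.example.com/images/logo.jpg no matter what URL the page itself was served from:

```html
<head>
<base href="http://www.example.com/" />
</head>
<body>
<img src="images/logo.jpg" />
</body>
```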

Differences Between HTML and XHTML

In HTML the <base> tag has no end tag.

In XHTML the <base> tag must be properly closed.

Friday 14 May 2010

Organic search results vs non-organic

Organic search results are listings on search engine results pages that appear because of their relevance to the search terms, as opposed to their being advertisements. In contrast, non-organic search results may include pay per click advertising.

http://en.wikipedia.org/wiki/Organic_search

Calling Page_ClientValidate() from a custom JavaScript function

Page_ClientValidate() is the built-in ASP.NET JavaScript function that is executed to validate the form against the validation controls before a postback. If validation fails, Page_ClientValidate() sets the Page_IsValid property to false and stops the postback.

Page_ClientValidate() can be called manually from a custom script function to make sure the form passes the validation.

But there can be issues, such as error messages being alerted twice, when this function is called both from a custom function and by the onClick of the postback button (which happens automatically).

ViewState - in simple terms

The ViewState is just an encoded version of what was last sent down, so that the next time there is a post to the server, it can all be sent back up as one thing. It can then be used on the server to see if a field's value changed, etc.

The browser does not care about ViewState, and does not use it to populate the controls.

Password fields lose value during PostBack

This is a security feature: the password isn't sent back down to the browser in clear text.

Monday 10 May 2010

Age calculation JavaScript

function calculateAge(currentDate, dob) {
var yourAge = currentDate.getFullYear() - dob.getFullYear();

// Subtract one if this year's birthday has not happened yet.
if ((currentDate.getMonth() < dob.getMonth()) ||
(currentDate.getMonth() == dob.getMonth() && currentDate.getDate() < dob.getDate()))
yourAge--;

return yourAge;
}

----------------------
The following also works, but it ignores leap years (it divides the elapsed milliseconds by 365 days):

var yourAge = Math.floor((currentDate - DOB) / (24*60*60*1000 * 365));

SSL Offloading

SSL offloading relieves a Web server of the processing burden of encrypting and/or decrypting traffic sent via SSL, the security protocol that is implemented in every Web browser. The processing is offloaded to a separate device designed specifically to perform SSL acceleration or SSL termination.

SSL termination capability is particularly useful when used in conjunction with clusters of SSL VPNs, because it greatly increases the number of connections a cluster can handle.

BIG-IP® Local Traffic Manager with the SSL Acceleration Feature Module performs SSL offloading.

http://www.f5.com/glossary/ssl-offloading.html

Tuesday 20 April 2010

Binding List<String> or String Array to Repeater

<asp:Repeater ID="Repeater1" runat="server">
<ItemTemplate>
<%# Container.DataItem %>
<br />
</ItemTemplate>
</asp:Repeater>

You can see that there is nothing special in the HTML source code to display array items in the Repeater control's ItemTemplate. Instead of something like
<%# DataBinder.Eval(Container.DataItem, "Price", "{0:c}") %>
<%# Container.DataItem %>

is used to render the item in each iteration of the Repeater control, because here Container.DataItem itself is the item to display, not a property within Container.DataItem.

string[] arr1 = new string[] { "array item 1", "array item 2" };
Repeater1.DataSource = arr1;
Repeater1.DataBind();

Thursday 15 April 2010

How to register an HTTP handler for IIS 7.0 running in Integrated Mode

1. Compile the HTTP handler class and copy the resulting assembly to the Bin folder under the application's root folder.
-or-
Put the source code for the handler into the application's App_Code folder.

For an example of an HTTP handler, see Walkthrough: Creating a Synchronous HTTP Handler.

2. In the application's Web.config file, create a handlers element in the system.webServer section.

Note

Handlers that are defined in the httpHandlers element are not used. If you do not remove the httpHandlers registrations, you must set the validation element’s validateIntegratedModeConfiguration attribute to false in order to avoid errors. The validation element is a child element of the system.webServer element. For more information, see "Disabling the migration error message" in ASP.NET Integration with IIS 7.0.

The following example shows how to register an HTTP handler that responds to requests for the SampleHandler.new resource. The handler is defined as the class SampleHandler in the assembly SampleHandlerAssembly.

<configuration>
<system.webServer>
<handlers>
<add name="SampleHandler" verb="*"
path="SampleHandler.new"
type="SampleHandler, SampleHandlerAssembly"
resourceType="Unspecified" />
</handlers>
</system.webServer>
</configuration>

Note

The resourceType attribute performs the same function as the Verify file exists option in IIS manager for IIS 6.0.

Read More>>

Find the Position of an Element using jQuery

$('#id').position().top;
$('#id').position().left;

Note that .position() returns coordinates relative to the offset parent; use .offset() for coordinates relative to the document.

Thursday 8 April 2010

A/B testing and Multivariate testing

A/B testing or bucket testing is a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates. A classic direct mail tactic, this method has been recently adopted within the interactive space to test tactics such as banner ads, emails and landing pages.

Significant improvements can be seen through testing elements like copy text, layouts, images and colors. However, not all elements produce the same improvements, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest improvements.

Employers of this A/B testing method will distribute multiple samples of a test, including the control, to see which single variable is most effective in increasing a response rate or other desired outcome. The test, in order to be effective, must reach an audience of a sufficient size that there is a reasonable chance of detecting a meaningful difference between the control and other tactics: see Statistical power.
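One practical detail when running an A/B test is deterministic bucketing: hashing a stable user id so that a returning visitor always sees the same variant. A minimal sketch, using a toy hash that is for illustration only:

```javascript
// Deterministically assign a user to variant "A" or "B".
function assignBucket(userId) {
  var hash = 0;
  for (var i = 0; i < userId.length; i++) {
    // Simple 32-bit rolling hash; adequate for illustration only.
    hash = (hash * 31 + userId.charCodeAt(i)) | 0;
  }
  return (Math.abs(hash) % 2 === 0) ? "A" : "B";
}
```

Because the assignment depends only on the id, no per-user state needs to be stored to keep the experience consistent across visits.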

This method differs from multivariate testing, which applies statistical modeling to allow a tester to vary multiple variables within the samples distributed.

Companies well-known for using A/B testing

Amazon.com, Google, Microsoft, eBay, Yahoo

Multivariate testing

In statistics, multivariate testing or multi-variable testing is a technique for testing hypotheses on complex multi-variable systems, especially used in testing market perceptions.

In internet marketing, multivariate testing is a process by which more than one component of a website may be tested in a live environment. It can be thought of in simple terms as numerous A/B tests performed on one page at the same time. Whereas A/B tests are usually performed to determine the better of two content variations, multivariate testing can theoretically test the effectiveness of limitless combinations. The only limits on the number of combinations and the number of variables in a multivariate test are the amount of time it takes to get a statistically valid sample of visitors and the available computational power.

Multivariate testing is usually employed in order to ascertain which content or creative variation produces the best improvement in the defined goals of a website, whether that be user registrations or successful completion of a checkout process (that is, conversion rate). Dramatic increases can be seen through testing different copy text, form layouts and even landing page images and background colours. However, not all elements produce the same increase in conversions, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest increase in conversions.

Multivariate testing is currently an area of high growth in internet marketing as it helps website owners to ensure that they are getting the most from the visitors arriving at their site. Areas such as search engine optimization and pay per click advertising bring visitors to a site and have been extensively used by many organisations but multivariate testing allows internet marketeers to ensure that visitors are being shown the right offers, content and layout to convert them to sale, registration or the desired action once they arrive at the website.

There are two principal approaches used to achieve multivariate testing on websites. One is page tagging, a process where the website creator inserts JavaScript into the site to inject content variants and monitor visitor response. Page tagging typically tracks what a visitor viewed on the website, how long the visitor remained on the site, and any click or conversion-related actions performed. Page tagging usually needs to be done by a technical team and typically cannot be accomplished by a web marketer. Later refinements on this method allow a single common tag to be deployed across all pages, reducing deployment time and removing the need for re-deployment between tests.

Companies known to employ a tag based method of multivariate testing are: Conversion Works, Adobe, Business Intelligence Group GmbH (B.I.G.), Amadesa, DIVOLUTION, Maxymiser, Google Website Optimizer, Vertster and Autonomy Corporation

The second principal approach does not require page tagging. By establishing a DNS proxy or hosting within a website's own datacenter, it is possible to intercept and process all web traffic to and from the site undergoing testing, insert variants, and monitor visitor response. In this case, all logic sits server-side rather than browser-side, and after the initial DNS changes are made, no further technical involvement is required from the website. SiteSpect is known to employ this method of implementation.

Multivariate testing can also be applied to email body content and mobile web pages.

A data center or datacenter (or datacentre), also called a server farm, is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

Conversion rate

In internet marketing, conversion rate is the ratio of visitors who convert casual content views or website visits into desired actions based on subtle or direct requests from marketers, advertisers, and content creators. The Conversion rate is defined as follows:

Conversion rate = Number of Goal Achievements / Visits
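As a quick worked example with invented numbers: 30 goal achievements out of 1,000 visits is a conversion rate of 0.03, or 3%. A trivial sketch:

```javascript
// Conversion rate = goal achievements / visits.
function conversionRate(goals, visits) {
  if (visits === 0) return 0; // avoid division by zero
  return goals / visits;
}
```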

Successful conversions are interpreted differently by individual marketers, advertisers, and content creators. To online retailers, for example, a successful conversion may constitute the sale of a product to a consumer whose interest in the item was initially sparked by clicking a banner advertisement. To content creators, however, a successful conversion may refer to a membership registration, newsletter subscription, software download, or other activity that occurs due to a subtle or direct request from the content creator for the visitor to take the action.


Internet marketing, also referred to as i-marketing, web-marketing, online-marketing, Search Engine Marketing (SEM) or e-Marketing, is the marketing of products or services over the Internet.