Common .htaccess Redirects

I’ve recently been redirecting content from one website to another. Here are some common, useful .htaccess redirects.

#301 Redirects for .htaccess
 
#Redirect a single page:
Redirect 301 /pagename.php http://www.domain.com/pagename.html
 
#Redirect an entire site:
Redirect 301 / http://www.domain.com/
 
#Redirect an entire site to a sub folder
Redirect 301 / http://www.domain.com/subfolder/
 
#Redirect a sub folder to another site
Redirect 301 /subfolder http://www.domain.com/
 
#This will redirect any file with the .html extension to use the same filename but use the .php extension instead.
RedirectMatch 301 (.*)\.html$ http://www.domain.com$1.php
 
##
#You can also perform 301 redirects using rewriting via .htaccess.
##
 
#Redirect from old domain to new domain
RewriteEngine on
RewriteBase /
RewriteRule (.*) http://www.newdomain.com/$1 [R=301,L]
 
#Redirect to www location
RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} ^domain\.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,NC]
 
#Redirect to www location with subdirectory
RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} domain.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/directory/index.html [R=301,NC]
 
#Redirect from old domain to new domain with full path and query string:
Options +FollowSymLinks
RewriteEngine On
RewriteRule ^(.*) http://www.newdomain.com%{REQUEST_URI} [R=302,NC]
 
#Redirect from old domain with subdirectory to new domain w/o subdirectory including full path and query string:
Options +FollowSymLinks
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/subdirname/(.*)$
RewriteRule ^(.*) http://www.katcode.com/%1 [R=302,NC]
 
Rewrite and redirect URLs with query parameters (files placed in root directory)
 
Original URL:

http://www.example.com/index.php?id=1

Desired destination URL:

http://www.example.com/path-to-new-location/

.htaccess syntax:
 
RewriteEngine on
RewriteCond %{QUERY_STRING} id=1
RewriteRule ^index\.php$ /path-to-new-location/? [L,R=301]
Redirect URLs with query parameters (files placed in subdirectory)
 
Original URL:

http://www.example.com/sub-dir/index.php?id=1

Desired destination URL:

http://www.example.com/path-to-new-location/

.htaccess syntax:
 
RewriteEngine on
RewriteCond %{QUERY_STRING} id=1
RewriteRule ^sub-dir/index\.php$ /path-to-new-location/? [L,R=301]
Redirect one clean URL to a new clean URL
 
Original URL:

http://www.example.com/old-page/

Desired destination URL:

http://www.example.com/new-page/

.htaccess syntax:
 
RewriteEngine On
RewriteRule ^old-page/?$ /new-page/ [R=301,L]
Rewrite URLs with a query parameter to a directory-based structure, keeping the parameter value at the URL root level
 
Original URL:

http://www.example.com/index.php?id=100

Desired destination URL:

http://www.example.com/100/

.htaccess syntax:
 
RewriteEngine On
RewriteRule ^([0-9]+)/?$ index.php?id=$1 [L,QSA]
Rewrite URLs with query parameter to directory based structure, retaining query string parameter in URL subdirectory
 
Original URL:

http://www.example.com/index.php?category=fish

Desired destination URL:

http://www.example.com/category/fish/

.htaccess syntax:
 
RewriteEngine On
RewriteRule ^/?category/([^/]+)/?$ index.php?category=$1 [L,QSA]
Domain change – redirect all incoming requests from the old to the new domain (retain path)
 
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example-old\.com$ [NC]
RewriteRule ^(.*)$ http://www.example-new.com/$1 [R=301,L]
If you do not want to pass the path in the request to the new domain, change the last line to:
 
RewriteRule ^(.*)$ http://www.example-new.com/ [R=301,L]
 
#From blog.oldsite.com -> www.somewhere.com/blog/
#retains path and query, and eliminates the extra blog path if the domain is blog.oldsite.com/blog/
Options +FollowSymLinks
RewriteEngine On
RewriteCond %{REQUEST_URI}/ blog
RewriteRule ^(.*) http://www.somewhere.com/%{REQUEST_URI} [R=302,NC,L]
RewriteRule ^(.*) http://www.somewhere.com/blog/%{REQUEST_URI} [R=302,NC]

Source: https://gist.github.com/ScottPhillips/1721489

Bing Code Search for C# in Visual Studio

Ask a C# related question and get snippets inside your IDE.

Write your question as a special comment starting with three slashes (///); your question may mention variables. Press the Tab key on the same line to get answers; press Enter to insert the selected snippet.
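
For illustration, here is a minimal sketch of that workflow. The question text and the inserted snippet are hypothetical examples, not actual output from the tool:

using System;
using System.IO;

class BingCodeSearchDemo
{
    static void Main()
    {
        // Type the question as a triple-slash comment, then press Tab on the same
        // line to list candidate snippets and Enter to insert the one you pick.
        /// how to read a text file line by line
        // A selected snippet might look something like this:
        foreach (var line in File.ReadLines("input.txt"))
        {
            Console.WriteLine(line);
        }
    }
}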

via Bing Code Search for C# in Visual Studio.

An introduction to the MEAN developer’s stack

Welcome to getting MEAN (MongoDB, ExpressJS, Angular.js and Node.js). Here are a few great tutorials and resources to get you started with MEAN on Windows. Best of all, you can use WebMatrix 3 from Microsoft, which is entirely free.

First, what is MEAN? Valeri Karpov of MongoDB defines the stack in his post here; I believe he’s credited with originating the phrase: http://thecodebarbarian.wordpress.com/2013/04/29/easy-web-prototyping-with-mongodb-and-nodejs/. Basically, MEAN is a pure-JavaScript stack for full-spectrum development. Its components are MongoDB, ExpressJS (sometimes BackboneJS), AngularJS, and Node.js: M-E-A-N. Valeri does a good job of spelling out some of the synergies.

So here are my favorite links in each of the MEAN categories. Enjoy. Got extras covering Azure + MEAN? Post them in the comments and let’s build up the article base.

Via Azure – Why you gotta be so MEAN?

WebMatrix ships with site templates that make it easy to get started. The WebMatrix Node.js starter template is a starting point for a full Node.js application—it shows examples of routing, middleware, custom errors, and more. The WebMatrix Node.js starter template is built on express, a flexible framework for building web applications. For more information on express, visit http://expressjs.com.

Note: A full explanation of Node.js or express is beyond the scope of this article. For information on Node.js and express, see http://nodejs.org/ and http://expressjs.com.

via How to Use the Node.js Starter Template in WebMatrix

This article will guide you through starting to build a website with Node.js using the MEAN stack. I will also try to help you set up the basic tools and infrastructure for developing the application, such as installing Node.js, MongoDB, etc. I am assuming you have some basic knowledge of Node.js and JavaScript, along with HTML. However, even if you are new to Node.js or the other technologies involved, don’t worry; since the article covers a couple of different technologies, I’ll just try to scratch the surface of each.

What does the MEAN acronym stand for?

  • M – MongoDB (NoSQL document store database)
  • E – Express (Web framework for use in Node.js)
  • A – Angular.js (Client side Javascript MVC framework)
  • N – Node.js

The advantage of the MEAN stack is that all the components are robust and popular, and JavaScript is the common language on both the client and server side. Node.js and MongoDB also couple together very well.

Below I classify the commonly used technology stack by category. I might not use every technology listed below in detail, but knowing the entire stack helps you understand what fits where.

Technology Stack Classified

  • Client
    • HTML5/CSS3
    • Angular.js as MVC framework
    • Javascript/Jquery
    • Bootstrap for responsive design
  • Server
    • Node.js
  • Data Access and ORM
    • Mongoose
  • Database
    • MongoDB

Keep in mind that although we use the term ORM above under Data Access and ORM, NoSQL databases such as MongoDB don’t enforce a schema, so Mongoose is a bit different from object-relational mappers like NHibernate or Entity Framework: its schemas are defined in application code rather than in the database.
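
As a quick illustration, here is a minimal sketch (a hypothetical Task model, not code from the article) of a Mongoose schema declared in application code while the MongoDB collection itself stays schemaless:

var mongoose = require('mongoose');

// The schema lives in application code; MongoDB does not enforce it on the collection.
var taskSchema = new mongoose.Schema({
    title:       { type: String, required: true },
    isCompleted: { type: Boolean, default: false },
    createdOn:   { type: Date, default: Date.now }
});

// Compile the schema into a model that can be queried and saved.
var Task = mongoose.model('Task', taskSchema);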

Via Node.js introduction using MEAN stack with WebMatrix

In this blog post, I will demonstrate how to build a web app with Node.js and MongoDB, and deploy it to Windows Azure as a Windows Azure Web Site. First, I will create a web site with Node.js, Express.js, Mongoose and MongoDB. Then I will create a MongoDB database on MongoLab, which is MongoDB as a service hosted in the cloud, and finally deploy the web app to a Windows Azure Web Site. The source code for the demo app is available on GitHub at https://github.com/shijuvar/NodeExpressMongo

About the Demo Web App

This is a simple task management application which provides functionality to add, edit, delete and list tasks. The home page lists the uncompleted tasks, and the List page lists all tasks.

Node.js modules for the web app

The following NPM modules will be used for this demo web app; a minimal sketch of how they fit together follows the list.

  1. Express.js – A light-weight web application framework for Node.js
  2. Mongoose – MongoDB object modeling framework for Node.js
  3. Jade – A server-side view engine for Node.js web apps, used here with the Express application
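
Here is a minimal wiring sketch (not the demo app’s actual code; the database name and port are made up) showing how Express, Mongoose and Jade fit together:

var express  = require('express');
var mongoose = require('mongoose');

var app = express();

// Jade is registered as the Express view engine and loaded by name.
app.set('view engine', 'jade');
app.set('views', __dirname + '/views');

// Mongoose handles the MongoDB connection and object modeling.
mongoose.connect('mongodb://localhost/tasksdemo');

app.get('/', function (req, res) {
    // Renders views/index.jade
    res.render('index', { title: 'Tasks' });
});

app.listen(3000);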

Via Building and Deploying Windows Azure Web Site with Node.js and MongoDB by using Microsoft WebMatrix

The empty site provides a very basic example of using an HTTP server – the same sample that’s available on nodejs.org. The Express Site is a basic application generated using the scaffolding tool in the Node.js framework Express. The Node Starter Site is where things start to get interesting. This boilerplate is hosted on GitHub, and shows how to implement sites that include parent/child layouts with Jade, LESS CSS, logins with Twitter and Facebook, mobile layouts, and CAPTCHA. When you create a new application using any of these templates, WebMatrix 2 will check that node, npm, and IISNode are installed on your system; if not, it will automatically install any missing dependencies. This feature is also particularly useful if you are building PHP/MySQL applications on Windows.

Via NODE.JS MEET WEBMATRIX 2

 

Great tips on SEO for business blogging

The key takeaway is that you shouldn’t split your content across separate domains (or even subdomains like blog.domain.com). Ideally, use a directory structure such as domain.com/blog/.

Think about your blog site directory structure. Make it easier to navigate for both users and search engine robots.

http://www.airpair.com/seo/seo-focused-wordpress-infrastructure

FluentMigrator timeout when adding a new column to a large table

I recently had the requirement to add a new column to a large but not massive table with over 12 million rows. We needed to allow logical deletes, so I had to add a boolean (BIT) column to that table. Arguably, I should have created the table with such a column in the first place, but hindsight is always 20-20.

My FluentMigrator migration was simple:

[Migration(201407221626)]
public class LogicalDeleteMyTable : Migration
{
	public override void Up()
	{
		// add the new IsDeleted column for logical deletes
		this.Alter.Table("MyTable")
			.AddColumn("IsDeleted")
			.AsBoolean()
			.NotNullable()
			.WithDefaultValue(0);
	}

	public override void Down()
	{
		this.Delete.Column("IsDeleted").FromTable("MyTable");
	}
}

We have a number of automated database migration deployments using FluentMigrator.NET. The first few deployments ran against small test databases; our test database has a bit of data in it, but nothing like the volume in production.

Luckily we had decided to pull back a copy of production to our UAT environment for this deployment. I was also working on a few anonymization and data archiving scripts, so I had needed a copy of production anyway. This turned out to be our saving grace.

As I said earlier, the production table had just over 12.5 million rows in it. When the FluentMigrator step kicked off in Octopus Deploy, the script eventually timed out. Rather than immediately try to rework the script, I decided to increase the timeout. Digging around in the FluentMigrator.NET settings wiki, I found that Paul Stovell had very smartly added a SQL command timeout override (in seconds) as a command-line runner option:

migrate --conn "server=.\SQLEXPRESS;uid=testfm;pwd=test;Trusted_Connection=yes;database=FluentMigrator" --provider sqlserver2008 --assembly "..\Migrations\bin\Debug\Migrations.dll" --task migrate --timeout 300

I tried a few more times, continually extending the timeout value, but the runner still timed out. Finally I extended the timeout to 10 minutes (600 seconds) and the script completed successfully. Wheeew!

In a future post I intend to cover ways in which you can add new columns to extremely large tables without such a performance hit.

The story of AllowRowLocks equals false. When indexes go bad.

I had a bad day yesterday. It was a combination of factors that took a total of six years to appear. This is the story of indexes gone bad. All because of a single index flag – AllowRowLocks.

For years we had a database that just worked, with a variety of applications connecting to it on a daily basis and a large number of users. Then a couple of months ago we changed the way our core application connected to the database. Boom… deadlocks, failed deletes; the pain just got worse and worse, and there was no obvious reason.

No clear exceptions were being logged in the event log. The cause was not obvious. We were testing the same release code in a multitude of testing and staging environments, and in every case the code worked. But it didn’t work in production. WTF? The code itself was simple: it was deleting a single row in the database via ADO.NET.

I watched the web application make the request back to the server, then saw no error, then watched the record seem to miraculously re-appear. It made no sense. Why wasn’t the record being deleted? Why was there no error?

I thought I was going crazy so I asked a colleague to do a code review with me. He thought it looked OK too, so he suggested we use SQL Profiler to see what was going on. We saw the T-SQL batch go across. The delete was there, then the code retried the request four more times, then silently failed. What was going on? We decided to run the request ourselves manually. Interestingly, the application wasn’t issuing the statement we expected:

DELETE FROM myTable WHERE Id = X

Instead, it was performing the delete with a ROWLOCK hint:

DELETE FROM myTable WITH (ROWLOCK) WHERE Id = x

Running this query directly gave us the following error:

Cannot use the ROW granularity hint on the table because locking at the specified granularity is inhibited.

That nice error (thanks Microsoft) basically means:

The WITH (ROWLOCK) query option is not compatible with ALLOWROWLOCKS=FALSE on a table index.

The fix is simple:

  1. Change the index to allow row locks (or disable the index), or
  2. Use page locks or table locks instead.

The general advice is that you should leave both row and page locking on unless you have a damn good reason not to, so that the SQL Server Database engine can work out its own locks. This diagram from MSDN shows the trade-off you are making when it comes to locking:

Why AllowRowLocks matters

Needless to say, we had indexes that had forcibly switched row locks off. More detailed information concerning the different types of index locks can be found at SQLServer-dba.com:

Question:

What do ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS mean on the CREATE INDEX statement? What is the cost/benefit of ON|OFF? Is there a performance gain?

Answer:

  1. SQL Server takes locks at different levels – such as table, extent, page and row. ALLOW_PAGE_LOCKS and ALLOW_ROW_LOCKS decide whether PAGE or ROW locks are taken.
  2. If ALLOW_PAGE_LOCKS = OFF, the lock manager will not take page locks on that index. The manager will only use row or table locks.
  3. If ALLOW_ROW_LOCKS = OFF, the lock manager will not take row locks on that index. The manager will only use page or table locks.
  4. If ALLOW_PAGE_LOCKS = OFF and ALLOW_ROW_LOCKS = OFF, locks are assigned at the table level only.
  5. If ALLOW_PAGE_LOCKS = ON and ALLOW_ROW_LOCKS = ON, SQL Server decides which lock level to take according to the number of rows and the memory available.
  6. Consider these factors when deciding to change the settings. There has to be an extremely good reason, backed up by some solid testing, before you can justify changing them to OFF.

I found a nice bit of advice on StackOverflow from @Guffa concerning the use of WITH(ROWLOCK):

The with (rowlock) is a hint that instructs the database that it should keep locks on a row scope. That means that the database will avoid escalating locks to block or table scope. You use the hint when only a single or only a few rows will be affected by the query, to keep the lock from locking rows that will not be deleted by the query. That will let another query read unrelated rows at the same time instead of having to wait for the delete to complete. If you use it on a query that will delete a lot of rows, it may degrade the performance as the database will try to avoid escalating the locks to a larger scope, even if it would have been more efficient. The database is normally smart enough to handle the locks on its own. It might be useful if you are trying to solve a specific problem, like deadlocks.

Another blogger (Robert Virag) at SQLApprentice states in his conclusion concerning AllowRowLocks and deadlock scenarios:

In case of high concurrency (especially writers) set ALLOW_PAGE_LOCK and ALLOW_ROW_LOCK to ON!

So how do you fix this, and on a large table is it going to cause a lengthy index rebuild? You can use the procedure sp_indexoption to change the options on indexes, although this is due to be phased out in favour of ALTER INDEX (T-SQL) after SQL Server 2014. The ALTER INDEX syntax to set ALLOW_ROW_LOCKS looks like this:

ALTER INDEX IX_Customer_Region
ON DBO.Customer
SET
(
ALLOW_ROW_LOCKS = ON
);
GO

You can also identify any other indexes that have row locks switched off (ALLOW_ROW_LOCKS = 0):

SELECT
  name,
  type_desc,
  allow_row_locks,
  allow_page_locks
FROM sys.indexes
WHERE allow_row_locks = 0 -- OR allow_page_locks = 0 -- if you want

Now armed with that we can take a look at the statistics for each specific index:

DBCC SHOW_STATISTICS(Customer, IX_Customer_Region)

Notably, Thomas Stringer points out that the BOL reference states:

Specifies index options without rebuilding or reorganizing the index

Job done. Now repeat for each problematic AllowRowLocks index. You could write a script to do them all.
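
For example, here is a sketch (untested; it uses the same sys.indexes columns as the query above) that generates the ALTER INDEX statements for every index with row locks switched off. Review the generated statements before running them:

SELECT
  'ALTER INDEX ' + QUOTENAME(i.name)
  + ' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
  + ' SET (ALLOW_ROW_LOCKS = ON);' AS FixStatement
FROM sys.indexes AS i
JOIN sys.tables AS t ON t.object_id = i.object_id
WHERE i.allow_row_locks = 0
  AND i.name IS NOT NULL -- heaps have no index name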

Ben Powell is a Microsoft .NET developer providing innovative solutions to common business-to-business integration problems. He has worked on projects for companies such as Dell Computer Corp, Visteon, British Gas, BP Amoco and Aviva Plc. He originates from Wales and now lives in Germany. He finds it odd to speak about himself in the third person.