The Gamification of SAP – Forbes
This looks very interesting.
Migrate from Apache to Nginx: The new guide
If you are here then most probably, like me, you too want to migrate from Apache to Nginx. Well, I have already migrated and I am loving it! You can get a quick recap of the Apache vs Nginx comparison here. In this blog post I will share and explain my Nginx conf, in the hope that it proves helpful for you. I have assumed that you have Nginx and PHP-FPM installed. You can read how to install Nginx here – http://goo.gl/tq6vT. I installed it from source. You can install PHP-FPM as per the guide at http://goo.gl/Fx9hJ. My repo had PHP-FPM, so I was saved from that trouble.
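In case you too want to build Nginx from source, the rough steps look something like the sketch below. The version number and configure flags are placeholders for illustration, so adjust them to your setup; I include the gzip_static flag here because a directive later in this post needs it.
[code lang="bash"]
# Minimal sketch of a source build. Version and flags are placeholders.
wget http://nginx.org/download/nginx-1.0.5.tar.gz
tar xzf nginx-1.0.5.tar.gz
cd nginx-1.0.5
# --with-http_ssl_module is needed for the HTTPS server block below,
# --with-http_gzip_static_module for the gzip_static directive.
./configure --with-http_ssl_module --with-http_gzip_static_module
make
sudo make install # installs under /usr/local/nginx by default
[/code]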
The net is littered with blog posts on this topic, but most of them are either out of date or make contradictory suggestions. I have scoured the net and cooked up my own Nginx config based on many helpful posts, and it has been progressively tweaked to my needs. I now feel that it is good enough to be shared.
To see if my Nginx config suits your needs, you first need to understand my use case. I have four domains hosted on the same server – applegrew.com, cink.applegrew.com, fb.applegrew.com and this blog.
- applegrew.com – This serves some static HTML, a few XMLs and dynamic web pages (PHP).
- cink.applegrew.com – This is my Chrome Experiments site and has no dynamic web pages, only static files like HTML, images, text files, etc.
- fb.applegrew.com – I added this one recently to host my Facebook Apps, so naturally this has dynamic web pages, coded in PHP. Since FB has mandated that from 1 Oct 2011 all FB Apps must be accessible via HTTPS, this domain is configured to be accessible via both HTTP and HTTPS. The Nginx config for this takes care of setting the PHP parameter `$_SERVER['HTTPS']` when HTTPS is used.
- blog.applegrew.com – Configuring Nginx for this blog was no easy task. This blog is powered by WordPress. If you too have a WordPress blog then you MUST install the following WP plugins for performance. The Nginx config that I have shared assumes these plugins are installed and takes full advantage of them.
Must-have WordPress plugins:-
- WP Super Cache – An excellent plugin which generates static HTML files for your blog. Nobody updates their blog every minute; it's not Twitter. So why generate the same page again and again for every user who visits your blog? The solution is to cache the generated page. When a user later visits your blog, that user is served the cached page. This saves a ton of overhead, particularly when you are using Nginx, since the PHP and Nginx processes run separately. We can configure Nginx to serve the generated file, if present, and completely bypass PHP. This plugin is smart enough to refresh the cache when you make a new post or update one.
Tip: Install this plugin after you have finalized your site's design, else you will have to manually clean the cache to make the site changes visible.
- WP Minify – This plugin strips out all the JS and CSS links from your blog and combines them into unified CSS and JS files. The result is a cut in the number of requests to your server for additional CSS and JS files. This plugin also minifies the combined CSS and JS, which produces much smaller files.
Tip: If you install a new plugin after this one and it doesn't seem to work, try clearing this plugin's cache. It is possible that the new plugin adds some new CSS or JS which gets stripped out but is not yet cached in the combined file.
Now it's time for the configs.
nginx.conf
[code lang="cink"]
user apache apache; #The uid and gid of the nginx process
worker_processes 4; #Number of worker processes that needs to be created.
error_log /var/log/error-n.log;
pid /usr/local/nginx/logs/nginx.pid;
events {
worker_connections 1000;
}
http {
include mime.types; #Includes a config file which is available with nginx's default installation.
index index.html index.htm index.php index.shtml;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
keepalive_timeout 5;
gzip on;
# Sets the default type to text/html so that gzipped content is served
# as html, instead of raw uninterpreted data.
#default_type text/html;
server {#If someone tries to access the url http://applegrew.com/xxx then
#this will redirect him to http://www.applegrew.com/xxx.
server_name applegrew.com;
rewrite ^ http://www.applegrew.com$request_uri? permanent;
}
server {#The config for www.applegrew.com
server_name www.applegrew.com;
access_log /var/www/applegrew.com/access-n.log main; #Where access log will be written for this domain.
error_log /var/www/applegrew.com/error-n.log;
root /var/www/applegrew.com/html; #The document root for this domain.
location ~ /admin/ { deny all; } #Denies access to the www.applegrew.com/admin/ URLs
location ~ /private/ { deny all; }
include cacheCommon.conf; #This caches common static files. This config is given later in this post.
include drop.conf; #This config is given later in this post.
include php.conf; #Configures PHP access for this domain. This config is given later in this post.
include err.conf; #Some common custom error messages I show. This config is given later in this post.
}
server {#Config to serve HTTP traffic.
server_name fb.applegrew.com;
access_log /var/www/fb.applegrew.com/access.log main;
error_log /var/www/fb.applegrew.com/error.log;
root /var/www/fb.applegrew.com/html;
include cacheCommon.conf;
include php.conf;
include drop.conf;
include err.conf;
}
server {#Config to serve HTTPS traffic.
listen 443;
server_name fb.applegrew.com;
ssl on;
ssl_certificate /var/ssl/fb.applegrew.com.crt; #See http://goo.gl/mvHo7 to know how to create crt file.
ssl_certificate_key /var/ssl/fb_applegrew_com.key;
access_log /var/www/fb.applegrew.com/access.log main;
error_log /var/www/fb.applegrew.com/error.log;
root /var/www/fb.applegrew.com/html;
include cacheCommon.conf;
include phpssl.conf; #Notice the difference. This is not php.conf. This config will be provided later in this post.
include drop.conf;
include err.conf;
}
server {
server_name blog.applegrew.com;
access_log /var/www/blog.applegrew.com/access-n.log main;
error_log /var/www/blog.applegrew.com/error-n.log;
root /var/www/blog.applegrew.com/html;
#If a gzipped (.gz) copy of the requested file already exists then that is sent, skipping on-the-fly compression by nginx.
gzip_static on;
location / {
# does the requested file exist exactly as it is? if yes, serve it and stop here
if (-f $request_filename) { break; }
# sets some variables to help test for the existence of a cached copy of the request
set $supercache_file '';
set $supercache_uri $request_uri;
# IF the request is a post, has a query attached, or a cookie
# then don’t serve the cache (ie: users logged in, or posting comments)
if ($request_method = POST) { set $supercache_uri ''; }
if ($query_string) { set $supercache_uri ''; }
if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) {
set $supercache_uri '';
}
# if the supercache_uri variable hasn’t been blanked by this point, attempt
# to set the name of the destination to the possible cache file
if ($supercache_uri ~ ^(.+)$) {
set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html;
}
# If a cache file of that name exists, serve it directly
if (-f $document_root$supercache_file) { rewrite ^ $supercache_file break; }
# Otherwise send the request back to index.php for further processing
if (!-e $request_filename) { rewrite . /index.php last; }
#try_files $uri $uri/ /index.php;
}
location ~ /wp-config\.php { deny all; }
location ~ /wp-content/bte-wb/.*\..* { deny all; }
include cacheCommon.conf;
include drop.conf;
include php.conf;
include err.conf;
#Let wordpress show its own error pages.
fastcgi_intercept_errors off;
}
server {
server_name cink.applegrew.com;
access_log /var/www/cink.applegrew.com/access-n.log main;
error_log /var/www/cink.applegrew.com/error-n.log;
root /var/www/cink.applegrew.com/html;
include cacheCommon.conf;
include drop.conf;
include err.conf;
}
server {#If none of the above matched then maybe the URL was accessed, say, via the IP directly. We then redirect to www.applegrew.com.
listen 80 default;
server_name _;
access_log /var/www/applegrew.com/access-n.log main;
server_name_in_redirect off;
rewrite ^ http://www.applegrew.com$request_uri? permanent;
include err.conf;
}
}
[/code]
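Before moving on, a quick sanity check never hurts. A minimal sketch (the /some/page path is just a stand-in) that tests the config parses and that the bare domain really does redirect to the www host:
[code lang="bash"]
# Check that the config parses cleanly.
sudo /usr/local/nginx/sbin/nginx -t
# The bare domain should answer with a 301 pointing at the www host,
# preserving the request URI. /some/page is a stand-in path.
curl -sI http://applegrew.com/some/page | grep -i -e '^HTTP' -e '^Location'
[/code]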
cacheCommon.conf
[code lang="cink"]
#Asks browsers to cache files with extension ico, css, js, gif, jpg, jpeg, png, txt and xml.
location ~* \.(?:ico|css|js|gif|jpe?g|png|txt|xml)$ {
# Some basic cache-control for static files to be sent to the browser
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
[/code]
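To verify the headers actually go out, you can curl any static asset and look at the response headers. A sketch below; style.css is a hypothetical file name, so use any real file from your docroot:
[code lang="bash"]
# Expect a far-future Expires plus the Pragma and Cache-Control
# headers added above. style.css is a hypothetical file name.
curl -sI http://www.applegrew.com/style.css \
| grep -i -e '^Expires' -e '^Cache-Control' -e '^Pragma'
[/code]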
drop.conf
[code lang="cink"]
location = /favicon.ico { access_log off; log_not_found off; } #Don’t log this.
location ~ /\. { access_log off; log_not_found off; deny all; } #Block . (dot) files access
#Don’t log and deny access to files which end with ~, as these are usually backup files.
location ~ ~$ { access_log off; log_not_found off; deny all; }
[/code]
err.conf
[code lang="cink"]
error_page 500 502 503 504 /50x.html;
error_page 403 404 /404.html; # Yes for 403 too we show 404 error, just to mislead.
location = /50x.html {
root /home/webadmin/err/;
}
location = /404.html {
root /home/webadmin/err/;
}
[/code]
php.conf
[code lang="cink"]
location ~ \.php { #All requests that end with .php are directed to PHP process.
include phpparams.conf; #This file is described later in this post.
}
[/code]
phpssl.conf
[code lang="cink"]
location ~ \.php {#This is the same as php.conf but adds a few SSL-specific configs.
fastcgi_param HTTPS on; #This sets $_SERVER['HTTPS'] to 'on'.
fastcgi_param SSL_PROTOCOL $ssl_protocol; #This sets $_SERVER['SSL_PROTOCOL'].
fastcgi_param SSL_CIPHER $ssl_cipher; #This sets $_SERVER['SSL_CIPHER'].
fastcgi_param SSL_SESSION_ID $ssl_session_id; #This sets $_SERVER['SSL_SESSION_ID'].
fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify; #This sets $_SERVER['SSL_CLIENT_VERIFY'].
include phpparams.conf;
}
[/code]
We need to set `$_SERVER['HTTPS']` and the other SSL variables ourselves since, unlike mod_php (in Apache), PHP-FPM is not embedded in Nginx and does not have this information available unless we pass it. The above config doesn't set all the variables a script might expect, only the usual ones. If you need to set more, see the Built-in variables section of Nginx's HttpSslModule.
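A quick way to confirm that these parameters actually reach PHP is to drop a throwaway probe script into the HTTPS docroot and fetch it over both schemes. This is just a sketch – ssltest.php is a hypothetical name, the -k flag is there in case your certificate is self-signed, and you should delete the file afterwards.
[code lang="bash"]
# Create a throwaway probe script (hypothetical name: ssltest.php).
cat > /var/www/fb.applegrew.com/html/ssltest.php <<'EOF'
<?php
echo 'HTTPS=', isset($_SERVER['HTTPS']) ? $_SERVER['HTTPS'] : '(unset)', "\n";
echo 'SSL_PROTOCOL=', isset($_SERVER['SSL_PROTOCOL']) ? $_SERVER['SSL_PROTOCOL'] : '(unset)', "\n";
EOF
curl -sk https://fb.applegrew.com/ssltest.php # should print HTTPS=on
curl -s http://fb.applegrew.com/ssltest.php # HTTPS should be unset here
rm /var/www/fb.applegrew.com/html/ssltest.php # clean up after the test
[/code]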
phpparams.conf
[code lang="cink"]
#PHP FastCGI
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_pass unix:/usr/local/nginx/logs/php5-fpm.sock; #I have configured both Php-Fpm and Nginx to communicate via file sockets.
[/code]
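Since Nginx and PHP-FPM talk over a Unix socket here, the most common failure is a missing socket or a permissions mismatch, which shows up as 502 errors from Nginx. A couple of quick checks, using the paths from the configs above:
[code lang="bash"]
# The socket should exist and be accessible to the nginx workers
# (both nginx and PHP-FPM run as user apache in this setup).
ls -l /usr/local/nginx/logs/php5-fpm.sock
# If you see 502s, nginx's error log will usually name the culprit.
tail -n 20 /var/log/error-n.log
[/code]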
/etc/php-fpm.conf
[code]
include=/etc/php-fpm.d/*.conf
pid = /var/run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/error.log
log_level = error
[/code]
/etc/php-fpm.d/www.conf
[code]
[www]
listen = /usr/local/nginx/logs/php5-fpm.sock
listen.allowed_clients = 127.0.0.1
user = apache
group = apache
pm = dynamic
pm.max_children = 6; #This can be increased on 512MB RAM. For 256MB you can use 2.
pm.start_servers = 3; #This can be increased. For 256MB you can use 1.
pm.min_spare_servers = 3; #This can be increased. For 256MB you can use 1.
pm.max_spare_servers = 5; #This can be increased. For 256MB you can use 1.
pm.max_requests = 500
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log #All PHP errors will go into this.
php_admin_flag[log_errors] = on
[/code]
Note: The settings above are indicative. You need to experiment with different settings on your system. I have a MySQL DB running on the same system too. In my case even the minimal settings for 256MB RAM caused problems – the PHP processes used to choke after 4-5 days of running. So I was finally forced to increase the server RAM to 512MB.
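After editing the pool settings it is worth validating the config and restarting PHP-FPM. A sketch, assuming the CentOS package installs a php-fpm service (your binary path and service name may differ):
[code lang="bash"]
php-fpm -t # parse-check the FPM config before restarting
sudo /sbin/service php-fpm restart # pick up the new pool settings
# Watch the resident memory per child to tune pm.max_children
# against the RAM you actually have.
ps -o rss,cmd -C php-fpm
[/code]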
/etc/init.d/nginxd
I wrote this shell script to start and stop Nginx as a service on my CentOS server.
[code lang="bash"]
#!/bin/bash
# chkconfig: 235 85 15
# description: The Nginx Server is an efficient and extensible \
# server implementing the current HTTP standards.
cmd=/usr/local/nginx/sbin/nginx #Change this to match your Nginx installation path.
start() {
    pgrep 'nginx$' > /dev/null
    if (( $? != 0 ))
    then
        echo 'Starting nginx'
        $cmd
        RETVAL=$?
        if (( $RETVAL == 0 ))
        then
            echo 'Started successfully'
        fi
    else
        echo 'Nginx already running'
        RETVAL=0
    fi
}
RETVAL=0
case "$1" in
    start)
        start
        ;;
    stop)
        echo 'Shutting down Nginx quickly'
        $cmd -s stop
        RETVAL=$?
        ;;
    quit)
        echo 'Gracefully shutting down Nginx'
        $cmd -s quit
        RETVAL=$?
        ;;
    restart)
        echo 'Stopping Nginx'
        $cmd -s stop
        sleep 1 # give the old master a moment to exit
        start
        ;;
    reload)
        echo 'Reloading config'
        $cmd -s reload
        RETVAL=$?
        ;;
    reopen)
        echo 'Reopening log files'
        $cmd -s reopen
        RETVAL=$?
        ;;
    help)
        $cmd -?
        RETVAL=$?
        ;;
    test)
        echo 'Testing config'
        $cmd -t
        RETVAL=$?
        ;;
    *)
        echo $"Usage: nginxd {start|stop|quit|restart|reload|reopen|help|test}"
        echo "stop - quick shutdown"
        echo "quit - graceful shutdown"
        echo "reload - close workers, load config, start new workers"
        echo "reopen - reopen log files"
        echo "test - only tests the config"
        RETVAL=3
esac
exit $RETVAL
[/code]
You can install the above by copying the nginxd file to /etc/init.d and then running
`sudo /sbin/chkconfig --add nginxd`
`sudo /sbin/chkconfig nginxd on`
You can then give commands to the script via
`sudo /sbin/service nginxd <command>`
Well, I hope this post has been helpful.
You can download all your data from Facebook.
I don't know how long it's been up, but today I noticed that Facebook allows you to download all your data.
The download archive will have:-
- Any photos or videos you’ve shared on Facebook.
- Your Wall posts, messages and chat conversations.
- Your friends’ names and their email addresses (if they have shared it).
What the archive won't have:-
- Your friends’ photos and status updates.
- Other people’s personal info.
- Comments you’ve made on other people’s posts.
To download your own archive, go to Account Settings and click on the "Download a copy" link. Clicking this link will take you to a page where you need to click the "Start Archive" button. Since archiving takes time, FB will mail you when the archive is ready.
Making Frameworks
Every few months we see a new framework springing up somewhere. Now we have too many choices – in fact, a little too many. Unfortunately, choice is something you don't have in a typical enterprise. In an enterprise, a software developer is coaxed into using many frameworks from which he would rather run a mile. Not a day goes by without him criticizing them. Is something wrong with today's developers? Why do they keep criticizing something their managements rave about?
Before I continue I should clearly define what I mean by a (software) framework. I consider a framework to be a package (a closed box) of code that helps a developer instruct the computer to solve some problem. Now, this problem could be of many types. You already know that instructing a processor in its native language (machine language) is no simple task. In other words, this is a problem. To solve it we have the OS. Yes, the OS is a framework too! Think about it. The OS provides us with basic APIs to read files, read from the network, allocate memory, display output, take input, and so many other things. Your program doesn't have to code them; the OS provides APIs for them. So the OS too is a framework, one which solves some common problems. To interact with Microsoft Windows' API a VC++ programmer will typically use the Microsoft Foundation Class (MFC) library. This is needed since interacting with the raw Win32 API is itself a problem – it is very difficult to use. To ease that, VC++ provides the MFC framework. So a typical framework also solves the problems in other frameworks.
No software in this world is perfect. If it were, there would be no software industry, as services account for 70% of revenue. All software has problems, which means there is always scope for a framework that solves them. However, the framework itself is software, which means we need more frameworks to solve that! This has given birth to today's infinite stack of frameworks. A stack of frameworks is no doubt needed, but an infinite stack is always ridiculous. An infinite stack is the condition where developers try to fix problems from the top of the stack instead of going down and fixing their source. Unaddressable problems in a framework usually arise from design issues and lack of foresight. Perceived problems in a framework arise from the framework user's lack of understanding of its scope. In either case the developer using the stack should remove the problem framework, but usually ends up adding a new framework on top. This is because – 1) it is cool to create a framework, and 2) it is much easier to code a framework than to make management understand why you want to change the stack. Once plagued with this condition the stack will inevitably grow like a cancer until the hardware begs for mercy, at which point it will be declared that the stack is too advanced for current hardware.
Broadly classified, frameworks are of two types – thin and thick. Thin frameworks try to ease out some kinks in other frameworks. Thick frameworks are the ones which promise to do every goddamn task you throw at them. In the future, if technology permits, maybe we will see a thick framework which will not only write the code for you but also clean your kitchen floors. :p If you are a developer then you must have come across frameworks like these.
It is interesting how salespeople have changed the jargon to market frameworks. They do not use the word framework; they say it is a 'technology'. Typically thick frameworks are marketed like this. Salespeople will list out a mind-numbing number of features, and at the end of the presentation the only thing you will remember is that this 'technology' is very powerful and hence awesome.
Endless framework stacks are no doubt ridiculous, but thick frameworks are evil. You depend on them, and when things don't work out the way you want, you have to run after their creators for help. Remember, they are a closed box – in industry parlance, a black box – so you know hardly anything about how they work. Do you want to take the risk of using something which does 90% of the job without you knowing anything about it? Thick frameworks, even when open source, are still dangerous, simply because they are so hard to read and could possibly have loads of bugs. This brings us to an interesting point. The code that we build on top of a framework is itself a package of frameworks; in fact, we can call that a framework too. So does this mean the bigger the code, the more unreliable it is? Well, yes, of course; but we try to minimize that by dividing the code into distinct, (almost) independent parts. This means that when evaluating a thick framework we must always try to identify the independent parts in it. If any such part is too big and complicated to understand, then you should not go ahead with it.
A user of a framework must understand how the framework works. A normal user of an OS need not know this, but as a developer you must. This is because a normal user will always stay within the bounds of the foreseen scope, while a developer needs to push its boundaries. By the time a developer realizes that the current stack may not fully meet his requirements, it is already too late to change it. The only way around it in that case is to hack it, and to hack it you need to know it inside out. For example, it is best to avoid a web framework so thick that you don't have even the slightest idea of how it routes HTTP requests internally.
People nowadays seem to miss that the point of creating frameworks and 'technology' stacks is to solve problems, not multiply them. Never make a framework which creates more problems than it solves.
Firefox 6 is out! Yeah, and it's not a beta.
Mozilla has gone nuts with their release version numbering. In a span of two to three months we have got FF 5 and 6! Too good to be true? Yeah, right. They say that it is just a number, but these "just number" upgrades break extensions. At the time of writing this article, Firebug was still broken in FF6.
Below I have pasted some interesting comments on this subject by other vexed FF users. (Src: http://hacks.mozilla.org/2011/08/firefox6/)
- Luis Elizondo wrote on August 16th, 2011 at 9:16 am:
Not enough changes for a mayor version release. I don’t want to be using Firefox 3569 next year!
- louisremi wrote on August 17th, 2011 at 12:37 am:
We’ve changed our release cycles: http://hacks.mozilla.org/2011/04/aurora/
Version numbers are just numbers, what matters is that we deliver features faster to you, Firefox users and Web developers.
- Jose wrote on August 17th, 2011 at 4:16 am:
Considering that “just numbers” break extensions and make it just plain difficult for admins…
- Alex wrote on August 17th, 2011 at 6:19 am:
Couldn’t agree more, Jose. Another new version…half my extensions no longer work. What’s the point? Using Firefox is no longer ‘fun’.
And, rapid updates is one of the reasons why Chrome is unsupported at work…now Firefox? You’re pretty much pushing companies back to IE. Heck, we’re still on IE8 at work, probably won’t go to IE9 until next year. Whether it’s right or wrong, companies move at a slower pace, because they need to continuously support the internal software that keeps the place going. These rapid changes just mean that Firefox won’t be supported anymore. The extensions we need for day to day work keep breaking every few weeks now. It’s not good.
- Logan wrote on August 17th, 2011 at 6:45 am:
I agree, this new versioning is ridiculous. If they’re just numbers, what’s wrong with “just numbering” them 4.1 and 4.2? Save the whole versions for releases that are actually a big deal.
- austin wrote on August 17th, 2011 at 7:05 am:
i have to agree the version numbering is going too high too fast, and soon you will be at very large numbers that start to get ridiculous (as his “firefox 3569” alludes to) people can handle small numbers even weird decimals(i say weird because 3.5.26 is not a real decimal but kinda looks like one. its made of a series of small numbers that are easy on the eye)
- Luis Elizondo wrote on August 17th, 2011 at 7:47 am:
This is already ridiculous. This change to the release cycle is one of the stupidest decisions I’ve ever seen in an Open Source Project. What are you trying to achieve Mozilla? Really. You’re breaking extensions every two months or less, you’re making it really hard for developers to keep up to date with your changes, and remember, those developers are working for free, on their free time. Remember the expectation of Firefox 3 and Firefox 4? Millions of downloads in hours, even a Guinness World Record, and now with 5 and 6 you’re just loosing momentum against other browsers, when will you get another ‘Firefox party’ to celebrate the next release of the “Greatest Browser Ever”? When you reach Firefox 1000? Or maybe Firefox 2000? Again, this is stupid. You can still make really fast updates without moving to a major version and making big efforts to not break extensions.
I will still use Firefox because of Firebug, but the moment you break it with one of your “mayor” versions, I’m done with you. There’s no reason to keep using a browser like Firefox when I have other options. This is not year 2000 when we have only two options.
- Jose wrote on August 16th, 2011 at 2:22 pm:
Thank you for breaking my extensions once again. Whats up with the number jumps????
- Luis Elizondo wrote on August 17th, 2011 at 7:51 am:
Ohh, don’t worry, developers will fix them just about a week before they launch Firefox 7 and the history will continue.
- Andy M wrote on August 17th, 2011 at 6:53 am:
I don’t mind the new features but why does each new version have to break so many plugins and add-ons?
It’s damn annoying to lose functionality that works perfectly well, just because a few new features have been added.
- raj wrote on August 17th, 2011 at 8:32 am:
version no’s can be 1 thru 100000. by the time we reach 1000 the product itself will become obsolete. we dont have netscape anymore right. same way. of couse firefox is the new avatar of netscape. so by the time firefox reaches 1000 it will be rechristined firebox LOL
- Joe Luhman wrote on August 17th, 2011 at 10:07 am:
Thank you for breaking my extensions for the second time in as many months. How long is this insanity going to continue? Please stop breaking the extensions for every major release, or please re-think this silly move to a six week ‘major’ release cycle.
I just hope Mozilla comes to its senses before its users run out of patience.