Monitoring Your Servers With Nagios Using NRPE and ELK Stack

In this blog, let’s look at the power of three tools, Elasticsearch, Logstash, and Kibana (together known as ELK), in collecting, analyzing, and visualizing all types of structured and unstructured data. You will see the advantages of these tools, and by the end of the article, you will learn how to integrate Nagios Remote Plugin Executor (NRPE) with ELK in order to monitor various system-level metrics. Clogeny has critical expertise in the ELK stack and has set it up for several of our customers.

ELK Stack: An Introduction

The ELK stack consists of Elasticsearch, Logstash, and Kibana. These are highly popular open-source tools to gather real-time analytics and actionable insights from the data residing on your storage clusters and log files.

Elasticsearch: Elasticsearch is a distributed, RESTful search and analytics platform that provides real-time analysis of data, high availability, and multitenancy, among other benefits.

Logstash: Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like for searching).

Logstash lets you store all the logs from your entire infrastructure in one location, along with search and graphing capabilities. It lets you easily parse text logs, so you can query precise items, such as 404 HTTP errors, Nagios critical alerts, and mail server issues, without finding keywords in the wrong places.

Kibana: Kibana is an Apache-licensed, open-source analytics and search dashboard that works through a browser interface. It is the default single-pane-of-glass dashboard for Elasticsearch. You can set up and start using Kibana in a jiffy. As it is built entirely on HTML and JavaScript, you only need a plain web server to run it. It’s just as easy to start using Kibana as it is Elasticsearch.

ELK Stack Advantages

Real-Time Data and Real-Time Analytics: The ELK stack gives you the power of real-time data insights, with the ability to perform super-fast data extractions from virtually all structured and unstructured data sources. Elasticsearch is both fast and powerful.

Scalability, High Availability, and Multitenancy: Elasticsearch lets you start small and expand as your business grows. It is built to scale horizontally out of the box. As you need more capacity, simply add another node and let the cluster reorganize itself to accommodate and exploit the extra hardware. Elasticsearch clusters automatically detect and rectify node failures. You can set up multiple indices and query each of them independently or together.

Full-Text Search: Under the hood, Elasticsearch uses Apache Lucene, a high-performance, open-source text-search engine, to provide the most powerful full-text search capabilities of any open-source product. The search supports multiple languages, an extensive query language, geolocation, context-specific suggestions, and auto-completion.
Document Orientation: Elasticsearch lets you store complex, real-world entities as structured JSON documents, where all fields are indexed by default. You can use these indices in a query to get accurate results in the blink of an eye.

Nagios With NRPE

Nagios Remote Plugin Executor (NRPE) allows you to monitor machine-level metrics like disk usage, memory usage, CPU load, processes, etc. Also, it allows you to remotely execute Nagios plugins on Linux and Unix machines. NRPE can also communicate with some Windows agent add-ons, so you can execute scripts and check metrics on remote Windows machines as well.


How to Install and Configure NRPE?

For Debian or Ubuntu machines, use the following command:
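sudo apt-get install nagios-nrpe-server    # hedged example; the package name may vary by release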

For CentOS machines:
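sudo yum install nrpe    # hedged example; NRPE is typically available from the EPEL repository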

Once the installation finishes successfully, edit the nrpe.cfg file to allow your Nagios server to communicate with the agent.
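# in /etc/nagios/nrpe.cfg, add your Nagios server's IP to allowed_hosts (the IP below is a placeholder)
allowed_hosts=127.0.0.1,192.168.1.100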

Now you have to restart the NRPE service to apply the above changes.
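sudo service nagios-nrpe-server restart    # Debian/Ubuntu; hedged example
sudo service nrpe restart                  # CentOS; service names may vary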

Installing NRPE Plugins

For Debian/Ubuntu machines, use the following command:
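sudo apt-get install nagios-plugins    # hedged example; installs the standard Nagios check plugins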

For CentOS machines:
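sudo yum install nagios-plugins-all    # hedged example; from the EPEL repository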

Test if It Works

On your Nagios server, run the following command:
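/usr/lib/nagios/plugins/check_nrpe -H <remote-host-ip>    # <remote-host-ip> is a placeholder for the monitored machine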

The command output should be the NRPE version like “NRPE v2.13”.

Add Check Commands in NRPE

The service check commands shipped with the Nagios plugins packages are installed in /usr/lib/nagios/plugins/ by default on 32-bit and 64-bit systems. The default installation adds a few commands to the configuration file. Add more commands as per your requirements, as shown below:
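The stock sample definitions in nrpe.cfg look like the following (the thresholds are the shipped defaults and purely illustrative):

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200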

The default configuration file for NRPE is “/etc/nagios/nrpe.cfg”. After updating the configuration, the user needs to restart NRPE using the command provided above.

How to Configure ELK for NRPE?

In the ELK stack, the Logstash tool is used for receiving, processing, and outputting all kinds of logs, such as system logs, web server logs, error logs, application logs, and just about anything you want to throw at it. Elasticsearch is used as the backend data store and Kibana as the frontend reporting tool.

To capture the monitoring data generated by NRPE, we need to configure Logstash to read it, parse it, and store it in Elasticsearch so that the data can be viewed in Kibana with various reporting options.

Following is a sample configuration for Logstash to read disk-monitoring data from NRPE and push it to Elasticsearch.
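A minimal sketch of such a configuration (the NRPE host IP, polling interval, grok pattern, and field names here are assumptions, and option names vary between Logstash versions):

input {
  exec {
    command => "/usr/lib/nagios/plugins/check_nrpe -H 192.168.1.10 -c check_hda1"
    interval => 60
    type => "nrpe-check_hda1"
  }
}
filter {
  grok {
    match => [ "message", "DISK %{WORD:status} - %{GREEDYDATA:details}" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}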

To collect different monitoring metrics, a user has to provide different inputs like the one above, which will be processed by the Logstash agent; the output will be stored in Elasticsearch. In the above example we are calling the “check_nrpe” plugin, which will internally call the NRPE agent on the Linux server to provide the monitoring data for the command specified. Here we have specified the command “check_hda1”, which will return the monitoring data for the disk mounted at the hda1 device on the Linux server.

Kibana Analytics Dashboard

To view the monitoring data stored in Elasticsearch through Kibana, you will have to use appropriate filters and queries to analyze the data.
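For example, a Lucene-style query in the Kibana query bar might look like the following (the field names and values follow the Logstash sketch above, so they are assumptions):

type:"nrpe-check_hda1" AND status:"CRITICAL"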

Kibana dashboard

Conclusion

Using NRPE and the ELK stack, you can create a one-stop dashboard from which you can analyze all the logs in your environment, such as application server, web server, and database server logs. You can monitor and analyze your systems better by looking at every server’s metrics for memory, disk usage, processes, etc., as well as the system logs for all the servers. Kibana’s ability to create different types of monitoring filters and queries makes analysis of all systems very easy.

Knife-cloud Gem: Introduction & Knife Plugin Development Using It

Chef Software, Inc. has released the knife-cloud gem. This article talks about what the knife-cloud gem is and how you can use it to develop your own custom knife cloud plugin.

Knife is a CLI tool used for communication between the local chef-repo and the Chef Server. There are a number of knife subcommands supported by Chef, e.g., knife bootstrap, knife cookbook, knife node, knife client, knife ssh, etc. A knife plugin is an extension of the knife commands that supports additional functionality. There are about 11 knife plugins managed by Chef and a lot more managed by the community.

The concept of knife-cloud came up because we have a growing number of cloud vendors, and therefore a number of knife plugins to support the cloud-specific operations. The knife cloud plugins use cloud-specific APIs to provision a VM and bootstrap it with Chef. These plugins perform a number of common tasks, such as connecting to the node using SSH or WinRM and bootstrapping the node with Chef. The knife-cloud gem has been designed to integrate the common tasks of all knife cloud plugins. As a developer of a knife cloud plugin, you will not have to worry about writing this generic code in your plugin. More importantly, if there is any bug or change in the generic code, the fix would be made in knife-cloud itself. Today, such changes need to be applied across all the knife plugins that exist.

Knife-cloud is open source and available at https://github.com/opscode/knife-cloud.
You may refer to https://github.com/opscode/knife-cloud#writing-your-custom-plugin for the steps to write your custom knife cloud plugin.

Clogeny Technologies has written a knife-cloud scaffolder (https://github.com/ClogenyTechnologies/knife-cloud-scaffolder) to make your job even simpler. The scaffolder generates the stub code for you with appropriate TODO comments to guide you in writing your cloud specific code.

To use the knife-cloud-scaffolder:
– git clone https://github.com/ClogenyTechnologies/knife-cloud-scaffolder
– Update properties.json
– Run the command: ruby knifecloudgen.rb <plugin-path> <properties-json-path>, e.g., ruby knifecloudgen.rb ./knife-myplugin ./properties.json

Your knife-myplugin stub will be ready. Just add your cloud specific code to it and you should be ready to use your custom plugin.

Installing Microsoft SQL Client Libraries Using CFEngine

CFEngine is an IT infrastructure automation framework that helps engineers, system admins, and other stakeholders in an IT organization manage IT infrastructure while ensuring service levels and compliance.

Here we use CFEngine to solve one of many automation problems: deploying the Microsoft SQL Server client utilities. We will take a dive into CFEngine syntax and try to program (or, in configuration-management terminology, declare the state of the system).

What does it take to install Microsoft SQL Server client libraries using CFEngine?

You need two things to achieve this:
1. Microsoft SQL Server client installers
2. CFEngine understanding – we will learn this as we write the policy.

Microsoft SQL Server 2008 R2 Native Client

Let’s try to install the native client for the 64-bit system. The installer is available here. So we first need to download the installer and use it to install the application. Let’s break it down into smaller tasks to achieve this and figure out how to do the same using CFEngine.
Basically we need to figure out two things here:
1. How to download the installer file from the URL.
2. How to use the downloaded file and invoke the installer.

CFEngine defines the term “promise” to describe the final state of a resource or part of a system. All such promises are written into a file referred to as a “policy file”. CFEngine has support for a large number of “promise types” that can help you achieve day-to-day infrastructure tasks such as creating users or files with specific attributes, installing packages, etc.

CFEngine has its own language syntax, known as a DSL, that helps you define how to automate the system. All of this is well described in the documentation. The things we need to know are variables, bundles (think of these as methods, i.e., groups of promises), and classes (think of these as events or conditionals). Then there is “ordering”, which defines the flow of execution; ordering is mostly implied, though you can define it explicitly using “depends_on”.

Well, I feel that I have described the whole CFEngine language in two paragraphs, which are going to be hard to understand unless you read the CFEngine docs! But even if you do read them, these paragraphs should help you follow along with a real-life example.

Jumping back to the above breakdown of tasks, let’s have a look at how to download the installer .msi file from a web URL. The URLs will be different for different SQL Server client versions and architectures.

Let’s define some variables using classes (as conditions):
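A hedged sketch of what such variable definitions might look like, inside the “ensure” bundle introduced below (the variable names are assumptions and the download URL is a placeholder):

vars:
  x86_64.2008R2.native_client::
    "installer_url"  string => "http://download.microsoft.com/.../sqlncli_amd64.msi";   # placeholder URL
    "installer_path" string => "C:\Windows\Temp\sqlncli_amd64.msi";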

The above CFEngine code defines string variables initialized to values under the condition (using classes) that we are targeting 64-bit x86 systems and trying to install “Microsoft SQL Server 2008 R2 Native Client”. Note here that ‘x86_64’ is one CFEngine class and ‘2008R2’ is another. You can define and initialize different values for these variables under other conditions, say x86.2008R2.native_client:: for 32-bit x86 systems.

So the next question is how do we define these classes?


Before we get into defining our classes, let’s write a definition for a bundle (think of it as writing a method) that takes a few input arguments.
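A hedged sketch of such a bundle signature (the bundle and argument names follow the surrounding text; the third argument is an assumption):

bundle agent ensure(architecture, mssqlversion, clienttype)
{
  # the vars:, classes:, and methods: sections discussed in this article go here
}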

I hope this is self-explanatory; just note that bundles are CFEngine’s way of grouping a set of promises, and they may or may not take arguments. Logically, bundles can hold variables, classes, or methods in order to define the state of the system in a certain context.

Let’s come back to defining classes required for our solution: classes can be based on system state or bundle arguments.

So this is how we can define the classes we require:
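A hedged sketch (class names equal the argument values, as explained below; “starnix” illustrates a conditional soft class):

classes:
  any::
    "$(architecture)" expression => "any";
    "$(mssqlversion)" expression => "any";
    "$(clienttype)"   expression => "any";

    # a conditional soft class: defined when the platform is Linux or Solaris
    "starnix" or => { "linux", "solaris" };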

We are defining our classes to be named the same as the argument values; for example, the argument architecture can be set to ‘x86_64’ and the argument mssqlversion to ‘2008R2’. These are defined in the ‘any’ context, but one can use a conditional expression as well. For example, define a soft class (i.e., a user-defined class) ‘starnix’ if the current platform is either Linux or Solaris, where linux is a hard class already defined by CFEngine.

Download

Now that we have the basics, let’s write a bundle to download the installer from a web URL. Since we are doing this on Windows, we have two options to download the package from the Internet: using WScript or using PowerShell cmdlets. For WScript we would have to write a script and trigger it via the CFEngine ‘commands’ promise. Using PowerShell, the script is very short and elegant compared to the older WScript style.

Here is how we do the download:
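A hedged sketch of such a download bundle (the bundle, class, and body names follow the text; if_repaired comes from the CFEngine standard library, and exact attribute values vary across CFEngine versions):

bundle agent download_from_url(url, target)
{
  classes:
      "already_downloaded" expression => fileexists("$(target)");

  commands:
    !already_downloaded::
      "powershell -Command \"(New-Object System.Net.WebClient).DownloadFile('$(url)','$(target)')\""
        contain => pscontainbody,
        classes => if_repaired("download_success");
}

body contain pscontainbody
{
  useshell => "true";   # boolean in older CFEngine releases; an option string in newer ones
}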

Note above that we only try to download if the file is not already downloaded, a condition that is set by defining a class ‘already_downloaded’ using the CFEngine function fileexists() in an expression.

The ‘commands’ promise helps trigger a DOS/PowerShell/Unix command. The command we use creates an object of the ‘System.Net.WebClient’ class in PowerShell and calls its DownloadFile() method to download the installer from a web URL. Note that we have to escape quotes to keep CFEngine happy and delimit at the proper places.
Additionally, if the download was successful, we define a new class to indicate that condition. This is achieved using classes => if_repaired(“download_success”).

Another important CFEngine concept used here is the ‘body’, which can help modularize the specification of attributes. We just use it to define the ‘useshell’ attribute; for larger examples see this.

Install

For installation we have to run the installer in silent, non-interactive mode. This can be achieved by passing the ‘/qn’ flag to “msiexec.exe”.

Here is how we can perform the install:
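A hedged sketch of the install bundle (the msiexec invocation via Start-Process with -Wait follows the text; the argument quoting is an assumption):

bundle agent install_using_msi(msi_path)
{
  commands:
      "powershell -Command \"Start-Process msiexec.exe -ArgumentList '/i $(msi_path) /qn' -Wait\""
        contain => pscontainbody;
}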

This should be easy to understand now; just note that we are reusing the ‘body’ concept here in the form of ‘pscontainbody’, which was defined alongside the download bundle previously. The Start-Process cmdlet with the ‘-Wait’ option runs the installation synchronously.

Now that we know how to download and install using bundles, we need to invoke these in order within the “ensure” bundle we looked at above while defining variables. For this we will use the “methods” promise type.
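A hedged sketch of those methods promises inside the “ensure” bundle (variable names follow the earlier sketch):

methods:
  any::
    "fetch"   usebundle => download_from_url("$(installer_url)", "$(installer_path)");

  download_success::
    "install" usebundle => install_using_msi("$(installer_path)");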

The methods promises are named ‘fetch’ and ‘install’ and invoke the download_from_url and install_using_msi bundles respectively, passing in the variable values. The “install” promise is evaluated only if the download was successful, which is flagged using the “download_success” CFEngine class.

Given here is the complete source code for installing various SQL Server client utilities.


Five Reasons to Use Docker for Managing Application Delivery


Packaging an application along with all of its bin/lib files and dependencies and deploying it in complex environments is much more tedious than it sounds. To alleviate this, Docker, an open-source platform, enables you to quickly assemble applications from their components and eliminates the friction between development, QA, and production environments. Docker is a lightweight packaging solution that can be used instead of a virtual machine: an open-source engine to create portable, lightweight containers from any application.


Docker is hardware- and platform-agnostic, which means a Docker container can run on any supported hardware or operating system. The fact that it takes less than a second to spawn a container from a Docker image shows that Docker really is lightweight compared to other virtualization mechanisms. Docker images are also less than a tenth the size of their virtual machine counterparts, and images created by extending a Docker base image can be as small as a few megabytes. This makes it easier and faster to move your images across different environments.

Docker Hub is the central repository for Docker. Docker Hub stores all public as well as private images. Private images are accessible only to the user account or team to which they belong. Docker Hub can be linked to GitHub or Bitbucket to trigger automated builds. The result of such a build is a ready-to-deploy Docker image of the application.

Docker provides a mechanism to separate application dependencies, code, configuration, and data through features such as container linking, data volumes, and port mapping. Dependencies and configuration are specified in the Dockerfile script. The Dockerfile installs all the dependencies, pulls the application code from a local or remote repository, and builds a ready-to-deploy application image.

Container Linking

The Docker container linking mechanism allows communication between containers without exposing the communication ports and details. The command below spawns a Tomcat application container and links it to the mysql-db-container. The Tomcat application can communicate with the MySQL database using the environment variables (like db:host, db:port, db:password) exposed by the linked container, thereby improving application security since the database ports need not be exposed externally.

docker run --link mysql:mysql-db-container clogeny/tomcat-application

Data Volumes

Docker provides data volumes to store, back up, and separate application data from the application. Data volumes can be shared between multiple containers, and read/write policies can be specified for a given data volume. Multiple data volumes can be attached to a container by using the -v flag multiple times. Docker also allows mounting a host directory as a data volume in a container.
docker run -v /dbdata --name mysql-instance1 my-sql
# creates the dbdata volume inside the mysql-instance1 container
docker run --volumes-from mysql-instance1 --name mysql-instance2 my-sql
# mounts and shares all the volumes from the mysql-instance1 container

Dockerizing a Ruby on Rails Application


Four simple steps to Dockerize your Ruby on Rails application

1. Install Docker

2. Create a Dockerfile as below in your application directory.
# Use the rails image from the Docker Hub central repository
FROM rails
MAINTAINER Clogeny <root@clogeny.com>
# Copy the source files from the host into the container (a URL to a code repository can also be used)
ADD ./src /railsapp
WORKDIR /railsapp
# Install the application's gem dependencies
RUN bundle install
# Expose port 3000 to communicate with the RoR server
EXPOSE 3000
# Run the RoR server with the "rails s" command
ENTRYPOINT rails s

3. Build the application image. This command creates a ready-to-run Rails image with your Rails application deployed (note that Docker image names must be lowercase).
docker build -t clogeny/my-ror-app .    # -t specifies the name of the image that gets created; "." is the build context

4. Push the application image to the central repository so that QA can use it to test the application. The image can be used to speed up and revolutionize the CI/CD workflow.
docker push clogeny/my-ror-app    # upload the Docker image to the central repo

Deploying the Dockerized Application

Deployment requires executing just one command to get the application up and running on the test machine. Assuming Docker is installed on the host, all we need to do is execute the “docker run” command to spawn a Docker container.

docker run                # spawn a Docker container
-t                        # -t allocates a pseudo-TTY so stdout/stderr are shown on the command line
-p 3010:3000              # -p maps host port 3010 to container port 3000 (host:container)
clogeny/my-ror-app        # use the "my-ror-app" image uploaded to the repo earlier

And here we are: the Docker container is up and running in a matter of seconds. We can access the application at http://localhost:3010

Clogeny to co-host Chef: The Path to Full Automation in Bengaluru



Clogeny and Chef will be co-hosting this year’s Chef Day on 12 March 2014 in Bengaluru, India. This event focuses on how Dev and Ops can change the way they work together to become more harmonious and productive and to increase the effectiveness of an IT-oriented organization. The session covers the challenges and complexity involved in managing complex infrastructures, the fundamentals of infrastructure automation, and how Chef, with its combination of configuration management and service-oriented architecture, makes it easy to create elegant, fully automated infrastructure.

The speakers are prominent contributors and core members of the Chef community, including:

  • Arne Gallagher: Arne is the Global Partner Manager at Chef. His session will provide an overview of Chef and the philosophy behind Chef, the open-source systems integration framework.
  • Michael Ducy: Michael is an Enterprise Architect at Chef. His session covers the current challenges that both large and small organizations face while managing their IT infrastructure in the cloud as well as in their private datacenters, and how complex infrastructure can be managed easily through automation.
  • Chirag Jog: Chirag is CTO at Clogeny and is also one of the major contributors to Chef. Chirag has contributed to Chef’s knife plugins, cookbooks, and recipes for managing complex infrastructure.
  • Kalpak Shah: Kalpak is CEO at Clogeny and has deep expertise in architecting Continuous Delivery strategies for multi-million-dollar enterprises and Fortune 500 companies.

After the sessions we will have a panel discussion focusing on enterprise adoption of DevOps. During the discussion, the panelists will field questions from the audience on best practices for DevOps and ways to increase the effectiveness of an IT-oriented organization.

Join us for an evening of discussions centered around DevOps. These sessions have a history of great discussions with elite speakers and core members from the Chef community. You will have an opportunity to interact with prominent members and discuss potential Chef implementations with your peers.

You can register for this event using the following link:

http://www.eventbrite.com/e/chef-the-path-to-full-automation-tickets-10829442153

In case of any queries, you can also reach us at +91 9689 939365.

Hurry, seats are limited and filling fast!

Best Practices for a Mature Continuous Delivery Pipeline


Continuous Integration (CI) is a software engineering practice that evolved to support extreme and agile programming methodologies. CI comprises best practices such as build automation, continuous testing, and code quality analysis. The desired result is that software in the mainline can be rapidly built and deployed to production at any point. Continuous Delivery (CD) goes further and automates the deployment of software to QA, pre-production, and production environments. Continuous Delivery enables organizations to make predictable releases, reducing risk, and automation across the pipeline shortens release cycles. CD is no longer optional if you run geographically distributed agile teams.

Clogeny has designed and deployed continuous integration and delivery pipelines for start-ups and large organizations leading to benefits like:

  • Automation of the entire pipeline – reduced manual effort and accelerated release cycles
  • Improved release quality – fewer rollbacks and defects
  • Increased visibility – leading to accountability and process improvements
  • Cross-team visibility and openness – increased collaboration between development, QA, support, and operations teams
  • Reduced deployment and support costs

A mature continuous delivery pipeline consists of the following steps and principles:

Maintain a Single Code Repository for the Product or Organization

Revision control for the project source code is absolutely mandatory. All the dependencies and artifacts required for the project should be in this repository. Avoid branches per developer to foster shared ownership and reduce integration defects. Git is a popular distributed version control system that we recommend.

Automated Builds

Leverage popular build tools like Ant, Make, Maven, etc., to standardize the build process. A single command should be capable of building your entire system, including the binaries and distribution media (RPMs, tarballs, MSI files, ISOs). Builds should be fast – larger builds can be broken into smaller jobs and run in parallel.

Automated Testing for Each Commit

An automated process where each commit is built and tested is necessary to ensure a stable baseline. A continuous integration server can monitor the version control system and automatically run the builds and tests. Ideally, you should hook up the continuous integration server with Gerrit or ReviewBoard to report the results to reviewers.

Static Code Analysis

Many teams ignore code quality until it is too late and accumulate heavy technical debt. All continuous integration servers have plugins that enable integration of static code analysis within your CD pipeline, or one can also automate this using custom scripts. You should fail builds that do not pass agreed-upon code quality criteria.

Frequent Commits into the Baseline

Developers should commit their changes frequently into the baseline. This allows fast feedback from the automated system, and there are fewer conflicts and bugs during merges. With automated testing of each commit, developers will know the real-time state of their code.

Integration Testing in Environments That Are Production Clones

Testing should be done in an environment that is as close to production as possible. The operating system versions, patches, libraries, and dependencies should be the same on the test servers as on the production servers. Configuration management tools like Chef, Puppet, and Ansible should be used to automate and standardize the setup of environments.

Well-Defined Promotion Process and Managing Release Artifacts

Create and document a promotion process for your builds and releases. This involves defining when a build is ready for QA or pre-production testing, or which build should be given to the support team. Having a well-defined process set up in your continuous integration servers improves agility within disparate or geographically distributed teams. Most continuous integration servers have features that allow you to set up promotion processes. Large teams tend to have hundreds or thousands of release artifacts across versions, custom builds for specific clients, RC releases, etc. A tool like Nexus or Artifactory can be used to efficiently and predictably store and manage release artifacts.

Deployment Automation

An effective CI/CD pipeline is one that is fully automated. Automating deployments is critical to reduce wasted time and avoid the possibility of human error during deployment. Teams should implement scripts to deploy builds and verify, using automated tests, that the build is stable. This way, not only the code but also the deployment mechanisms get tested regularly.

It is also possible to setup continuous deployment which includes automated deployments into production environments along with necessary checks and balances.

Configuration Management for Deployments

Software stacks have become complicated over the years, and deployments more so. Customers commonly use virtualized environments, the cloud, and multiple datacenters. It is imperative to use configuration management tools like Chef, Puppet, or custom scripts to ensure that you can stand up environments predictably for dev, QA, pre-prod, and production. These tools will also enable you to set up and manage multi-datacenter or hybrid environments for your products.

Build Status and Test Results Should Be Published Across the Team

Developers should be automatically notified when a build breaks so it can be fixed immediately. It should be possible to see whose changes broke the build or test cases. This feedback can be positively used by developers and QA to improve processes.

Every CxO and engineering leader is looking to increase the ROI and predictability of their engineering teams. It is proven that these DevOps and Continuous Delivery (CD) practices lead to faster release cycles, better code quality, reduced engineering costs, and enhanced collaboration between teams.

Learn more about Clogeny’s skills and expertise in DevOps/CI/Automation here. Learn more about our exciting case studies here.

Get in touch with us for a free consulting session to embark on your journey to a mature continuous delivery pipeline – email us at abhijit@clogeny.com

Chef Knife Plugin for Windows Azure (IaaS)

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps in managing your IT infrastructure and applications as code. It gives you a way to automate your infrastructure and processes.

Knife is a CLI to create, update, search and delete the entities or manage actions on entities in your infrastructure like node (hosts), cloud resources, metadata (roles, environments) and code for infrastructure (recipes, cookbooks), etc. A Knife plug-in is a set of one (or more) subcommands that can be added to Knife to support additional functionality that is not built-in to the base set of Knife subcommands.

knife-azure is a knife plugin that helps you automate provisioning virtual machines in Windows Azure and bootstrapping them. This article talks about using Chef and the knife-azure plugin to provision Windows/Linux virtual machines in Windows Azure and bootstrap them.

Understanding Windows Azure (IaaS):

A complete deployment for a virtual machine in Azure looks as below.

Windows Azure IaaS deployment model

To deploy a Virtual Machine in a region (or service location) in Azure, all the components described above have to be created:

  • A Virtual Machine is associated with a DNS (or cloud service).
  • Multiple Virtual Machines can be associated with a single DNS, with load balancing enabled on certain ports (e.g., 80, 443, etc.).
  • A Virtual Machine has a storage account associated with it, which stores the OS and data disks.
  • An X509 certificate is required for password-less SSH authentication on Linux VMs and HTTPS-based WinRM authentication for Windows VMs.
  • A service location is a geographic region in which to create the VMs, storage accounts, etc.

The Storage Account

The storage account holds all the disks (OS as well as data). It is recommended that you create a storage account in a region and use it for the VMs in that region.
If you provide the option --azure-storage-account, the knife-azure plugin creates a new storage account with that name if it doesn't already exist, and uses this storage account to create your VM.
If you do not specify the option, the plugin checks for an existing storage account in the service location you have specified (using the option --service-location). If no storage account exists in that location, it creates a new storage account with a name prefixed with the azure-dns-name and suffixed with a 10-character random string.

Azure Virtual Machine

This is also called a Role (specified using the option --azure-vm-name). If you do not specify the VM name, the default VM name is taken from the DNS name (specified using the option --azure-dns-name). The VM name should be unique within a deployment.
An Azure VM is analogous to an Amazon EC2 instance. Just as an instance in Amazon is created from an AMI, an Azure VM is created from the stock images provided by Azure. You can also create your own images and save them against your subscription.

Azure DNS

This is also called a Hosted Service or Cloud Service. It is a container for your application deployments in Azure (specified using the option --azure-dns-name). A cloud service is created for each Azure deployment. You can have multiple VMs (Roles) within a deployment, with certain ports configured as load-balanced.

OS Disk

A disk is a VHD that you can boot and mount as a running version of an operating system. After an image is provisioned, it becomes a disk. A disk is always created when you use an image to create a virtual machine. Any VHD that is attached to virtualized hardware and that is running as part of a service is a disk. An existing OS disk (specified using the option --azure-os-disk-name) can be used to create a VM as well.

Certificates

For SSH login without a password, an X509 certificate needs to be uploaded to the Azure DNS/Hosted Service. As an end user, simply specify your private RSA key using the --identity-file option and the knife plugin takes care of generating an X509 certificate. The virtual machine that is spawned then contains the required SSH thumbprint.

Install knife-azure plugin

You can either install via rubygems or build it from the latest source code.

Gem Install

Run the following command:
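gem install knife-azure    # installs the released gem from rubygems.org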

Install from Source Code

To get the latest changes in the knife-azure plugin, download the source code, then build and install the plugin using the four steps below (a hedged sketch of the corresponding commands follows the list):
1. Uninstall any existing versions

2. Clone the git repo and build the code

3. Install the gem

4. Verify your installation
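A hedged sketch of these steps (the repository layout and generated gem file name are assumptions):

gem uninstall knife-azure                                # 1. uninstall any existing versions
git clone https://github.com/opscode/knife-azure.git     # 2. clone the git repo and build the code
cd knife-azure
gem build knife-azure.gemspec
gem install knife-azure-*.gem                            # 3. install the gem
gem list knife-azure                                     # 4. verify your installation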

To provision a VM in Windows Azure and bootstrap it using knife, first create a new Windows Azure account at http://manage.windowsazure.com, and then download the publish settings file from https://windows.azure.com/download/publishprofile.aspx?wa=wsignin1.0.
The publish settings file contains certificates used to sign all the HTTP requests (REST APIs).

Azure supports two modes to create virtual machines – quick create and advanced.

Azure VM Quick Create

You can create a server with minimal configuration. On the Azure Management Portal, this corresponds to the “Quick Create – Virtual Machine” workflow. The corresponding sample command for a quick create of a small Windows instance is:
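A hedged example (option names depend on your knife-azure and knife-windows versions; all values and the image name are placeholders):

knife azure server create \
  --azure-publish-settings-file ~/azure.publishsettings \
  --azure-dns-name my-dns \
  --azure-source-image <windows-server-2012-image-name> \
  --azure-vm-size Small \
  --bootstrap-protocol winrm \
  --winrm-user chefadmin --winrm-password 'SuperSecret1!' \
  -r 'recipe[iis]'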

Azure VM Advanced Create

You can set various other options in an advanced create, including the service location (region), storage account, VM name, etc. The corresponding command to create a Linux instance with advanced options is:
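A hedged example (values and the image name are placeholders):

knife azure server create \
  --azure-publish-settings-file ~/azure.publishsettings \
  --azure-dns-name my-dns \
  --azure-vm-name my-vm01 \
  --azure-service-location 'West US' \
  --azure-storage-account mystorageaccount \
  --azure-source-image <ubuntu-12.04-image-name> \
  --azure-vm-size Medium \
  --ssh-user azureuser --identity-file ~/.ssh/id_rsa \
  -r 'recipe[apache2]'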

To create a VM and connect it to an existing DNS/service, you can use a command as below:
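A hedged example, reusing the DNS name created above (values and the image name are placeholders):

knife azure server create \
  --azure-connect-to-existing-dns \
  --azure-dns-name my-dns \
  --azure-vm-name my-vm02 \
  --azure-service-location 'West US' \
  --azure-source-image <ubuntu-12.04-image-name> \
  --ssh-user azureuser --ssh-password 'SuperSecret1!' \
  -r 'recipe[apache2]'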

List available Images:
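knife azure image list    # lists the stock and user images available to your subscription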

List currently available Virtual Machines:
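knife azure server list    # lists the VMs (roles) under your subscription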

Delete and Clean up a Virtual Machine:
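knife azure server delete my-vm01 --purge    # hedged example; --purge also removes the corresponding node and client from the Chef server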

This post is meant to explain the basics and usage for knife-azure.

Find More About Clogeny:

Learn more about Clogeny’s offerings in DevOps and automation as well as our success stories.

Writing a Chef Ohai plugin for the Windows Azure IaaS cloud

Chef is an open-source systems management and cloud infrastructure automation framework created by Opscode. It helps in managing your IT infrastructure and applications as code. It gives you a way to automate your infrastructure and processes.

Knife is a CLI to create, update, search and delete the entities or manage actions on entities in your infrastructure like node (hosts), cloud resources, metadata (roles, environments) and code for infrastructure (recipes, cookbooks), etc. A Knife plug-in is a set of one (or more) subcommands that can be added to Knife to support additional functionality that is not built-in to the base set of Knife subcommands.

Ohai, Ohai plugins and the hints system:
Ohai is a tool that is used to detect certain properties about a node’s environment and provide them to the chef-client during every Chef run.

Devise – a fully featured authentication mechanism for Rails Applications

While developing any Ruby-on-Rails-based web application, programmers often spend a significant amount of time developing the authentication modules from scratch – the sign-up process, login and logout modules, forgot password, password reset, and many such functionalities.

What is the solution?
Well, there are a lot of gems and plugins that provide some of these functionalities and can reduce our work. Although they help us write less code, maintaining multiple gems is a bit cumbersome, and so Devise comes into the picture.

What is Devise?
Devise is a full-featured authentication mechanism for Rails applications. It’s easy and quick to integrate, widely used, and properly tested. Its defaults are pretty good.
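As a quick illustration, a minimal sketch of wiring Devise into a Rails application (the User model name is hypothetical):

# Gemfile
gem 'devise'

# then, from the application directory:
bundle install
rails generate devise:install
rails generate devise User    # creates a Devise-backed User model (hypothetical model name)
rake db:migrate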