Wednesday, September 19, 2012

ci-go-nfo v0.0.1 : console util for ThoughtWorks' Go CI Server



ci-go-nfo v0.0.1



A rubygem console utility to get focused info about your Go Continuous Integration pipelines easily, with no more switching back to the browser.

@RubyGems: https://rubygems.org/gems/ci-go-nfo

@GitHub: https://github.com/abhishekkr/ci-go-nfo


Installation 

$ gem install ci-go-nfo



Usage ~ ci-go-nfo v0.0.1 

to set-up credential config for your go-ci
$ ci-go-nfo setup
it asks for
(a.) the location where you want to store your configuration file
(b.) the URL for your Go Server like http://my.go.server:8153
(c.) then username and password (create a read-only a/c for it)



to show go-ci info of all runs
$ ci-go-nfo

to show go-ci info of failed runs
$ ci-go-nfo fail

to show go-ci info of passed runs
$ ci-go-nfo pass

_____

.....more to come


output example:

 $ ci-go-nfo setup
 Store sensitive Go Configs in file {current file: /home/myuser/.go.abril}:

 Enter Base URL of Go Server {like http://:8153}: http://my.go.server:8153


 This is better to be ReadOnly account details...

 Enter Log-in UserName: go_user

 Password: restrictedpassword


 $ ci-go-nfo pass
  my_pipeline -> specs -> specs
  Success  for run#2 at 2012-09-19T04:24:38
  details at http://my.go.server:8153/go/tab/build/detail/my_pipeline/10/specs/2/specs

  my_pipeline -> package -> gemify
  Success  for run#1 at 2012-09-19T07:04:39
  details at http://my.go.server:8153/go/tab/build/detail/my_pipeline/10/package/1/gemify

 $ ci-go-nfo fail
  your_pipeline -> smoke -> cukes
  Failure  for run#5 at 2012-09-19T04:24:38
  details at http://my.go.server:8153/go/tab/build/detail/your_pipeline/7/smoke/5/cukes

 $ ci-go-nfo
  my_pipeline -> specs -> specs
  Success  for run#2 at 2012-09-19T04:24:38
  details at http://my.go.server:8153/go/tab/build/detail/my_pipeline/10/specs/2/specs

  my_pipeline -> package -> gemify
  Success  for run#1 at 2012-09-19T07:04:39
  details at http://my.go.server:8153/go/tab/build/detail/my_pipeline/10/package/1/gemify

  your_pipeline -> smoke -> cukes
  Failure  for run#5 at 2012-09-19T04:24:38
  details at http://my.go.server:8153/go/tab/build/detail/your_pipeline/7/smoke/5/cukes

Sunday, August 5, 2012

Puppet ~ a beginners concept guide (Part 3) ~ Modules much more


You might prefer first reading Part#1 (intro to Puppet) & Part#2 (intro to modules) of this guide.
The section after this one, Part#4 (Where is my Data?), discusses how to handle configuration data.

Puppet
beginners concept guide (Part 3)

Modules with More

Here, some time on the practices to prefer while writing most of your modules.

[] HowTo Write Good Puppet Modules  
(so everyone could use it and you could use it everywhere)

  • platform-agnostic
    With a change in Operating System distro, a module might also require different package names, configuration file locations, device port names, system commands and more.
    Obviously, it's not expected to test each and every module against each and every distro and get it foolproof for community usage. But what is expected is to use case $operatingsystem{...} statements for whatever distros you can, and to let users get notified via fail("") in case they've got to add something for their distro (and might also contribute back)..... like the following

    case $operatingsystem {
      centos, redhat: {
        $libxml2_development = 'libxml2-devel'
      }
      ubuntu, debian: {
        $libxml2_development = 'libxml2-dev'
      }
      default: {
        fail("Unrecognized libxml2 development header package name for your O.S. $operatingsystem")
      }
    }

    ~
  • untangled puppet strings
    You are writing puppet modules. Good chance is you have a client or personal environment to manage, for which you had a go at it.
    That means there is going to be some environment-specific client or personal code &/or configuration that is 'for your eyes only'. This will prohibit you from placing any of your modules in the Community.
    That's wrong on two main fronts. First, you'll end up using so much from OpenSource and give back nothing. Second, your modules will miss out on the community update/comment action.
    So, untangle all your modules into atomic service-level modules. Further modularize those modules per the service's puppet-ization requirements. That will be like sub-modules for install, configure, service and whatever more you can extract out. Now these sub-modules can be clubbed together, and we can move bottom-up gradually.
    Now you can keep just your private service modules to yourself; go ahead and use the community-trusted and available modules for whatever you can..... try making minor updates to those and contribute the updates back. Write the others that you don't find out in the wild and contribute those too, for the community to use, update and improve.
    ~
  • no data in c~o~d~e
    Now that you are delivering 'configuration as code', adopt the good coding practices applicable in this domain. One of those is keeping data separate from the code, as in no db-name, db-user-name, db-password, etc. details stored directly in the manifest of a module intending to create the db-config file.
    There will be a detailed section later over different external data usages: a separate parameter manifest setting up values when included, extlookup loading values from CSVs, puppetDB, the hiera data-store, and custom facts files to load up key-values.
    ~
  • puppet-lint
    To keep the modules adhering to syntactically correct and beautiful DSL code-writing practice, so that the DSL and the community contributors both find it easy to understand your manifests. It's suggested to add it to the rake default of your project to check all the manifests, run before every repo check-in (see the Rakefile sketch after this list).
    ~
  • do-undo-redo
    It's suggested to have an undo-manifest ready for all the changes made by a module. It mainly comes in handy for infrastructures/situations where creating and destroying a node is not under your administrative control, or consumes a hell of a lot of time.
    Obviously, in case getting a new node is easier..... that's the way to go instead of wasting time un-doing all the changes (and relying on that).
    Undo-manifests are just there for the dry days when there is no 'cloud'.
    ~
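
As flagged in the puppet-lint point above, a minimal Rakefile sketch; it assumes the puppet-lint gem is installed (the gem ships a ready-made 'lint' rake task):

    # Rakefile at the root of your puppet project
    require 'puppet-lint/tasks/puppet-lint'

    # a plain `rake` run now lint-checks all the *.pp manifests,
    # easy to hook in before every repo check-in
    task :default => :lint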



[] More about Modules  (moreover.....)
Where to get new: http://forge.puppetlabs.com/ is the community-popular home for most of the Puppet Modules.
Where to contribute:
You can manage your public module at GitHub or a similar free online repository, like puppetlabs-kvm.
Then you can push your module to forge.puppetlabs.com.


Saturday, July 28, 2012

DevOps AND 12FactorApp ~ some obsolete & much valid

Why?
A few months ago I came across The Twelve-Factor-App, preaching best practices for building and delivering software. Nothing really new, but a good central place with many good practices for people to refer to and compare against. Recently I saw an implementation of it in an environment where the basic concerns were already handled, so the solution implemented was redundant, an extra cost and, to some level, low-grade.


What?

Actually, what 12FactorApp is..... is a good set of ideas around a basic set of concerns. The concerns are right; the solutions suggested are situational, and the situation assumed is the default/basic setup. For teams following good DevOps-y practices, the solutions don't turn out to be exactly the same.

So, to avoid confusion for more people, and foremost to save myself the pain of explaining my views against 12FactorApp at different places at different times..... here is what the concerns are, and what the solutions turn into when following a proper DevOps-y approach.




What in @12FactorApp doesn't suit DevOps-y Solutions at all

  1. ~
  2. Dependencies
    [+] Obsolete: 'If the app needs to shell out to a system tool, that tool should be vendored into the app.'
    Changed-to: Make your automated configuration management system handle it.
  3. Configurations
    [+] Obsolete: The twelve-factor app stores config in environment variables, changing between deploys w/o changing any code.
    Changed-to: That is not a solution that scales finely or copes with disaster management. Now configuration management handles the node-level deterministic state, and the non-developer box-level configuration is no more in code. Keeping configurations in a file is a much more verifiable, cleaner and broadly available solution. So there will be no more noise of different environment-level configurations in the same configuration file.
  4. ~
  5. Build, Release, Run
    [+] Obsolete: The resulting release contains both the build and config.
    Changed-to: Packaging configuration along with the build makes it dependent on a set environment. Any disaster-resistant or scalable architecture would be crippled by it, as it requires creating new packages for every change. Make your automated configuration management solution intelligent enough to infer the required configuration and deploy the build.
  6. ~
  7. ~
  8. Concurrency
    [+] Obsolete: Twelve-factor app processes should never daemonize or write PIDfiles.
    Changed-to: PID files help some automated configuration management solutions easily identify the 'service' checks placed in them. There are operating-system-level process managers also supporting PID files. Having a PID file eases up lots of other custom monitoring plug-ins too... and is not a bad practice to have.
  9. ~
  10. ~
  11. ~
  12. ~


Cumulative Correct Concerns 3C@12FactorApp and DevOps-y Solutions

Overall, the aim is to achieve an easy-to-setup, clean-to-configure, quick-to-scale and smooth-to-update software development ambiance.
The 12 Concerns+Solutions:
  1. Problem: Maintaining Application Source Code
    Solution:
    a. Use a Version Control Mechanism, if possible a Distributed VCS like git, with a privately hosted (at least a private account) code repository.
    b. Unique application~to~repository mapping, i.e. a single application or independent library's source code in a single repository.
    c. For different versions of the same application, depend on different commit-stages (not even branches, in general cases) of the same code repository.
  2. Problem: Managing Application Dependencies
    Solution:
    a. Never manually source-compile any dependent library or application. Always depend on the standard PackageManager for the intended platform (like rpm, pkg, gem, egg, npm). If there are no packages available, create one. It's not really difficult. As a standard practice, I'd suggest utilizing something like FPM (maybe even the fpm-cookery gem if you like), which would give you the elasticity of easily changing your platform without worrying about re-creating packages. Even creating rpm, gem and others is not too much pain compared to the stability it brings to infrastructure set-up.
    b. Make your automated configuration management utility ensure all the required dependencies of your application are pre-installed, in the correct order, of the correct version, with the correct required configurations.
    c. The dependency configuration will be specific enough to ensure usage of the installed & configured dependencies. So in case of compiling a binary, use static library linking. If you are loading external libraries, ensure the fixated path. The same configuration management tool can be run even in solo/masterless (no-server) mode.
  3. Problem: Configuration in Code, Configuration at all Deploys
    Solution:
    a. Ideally, no configuration details (a node's IP/Name, credentials, etc.) shall be part of the application's codebase. If such a configuration file is locally available in the developer-box repository, on non-alert & non-gitignore days it might get committed to your repository.
    b. Make your automated configuration management tool generate all these configuration files for a node, based on the node-specific details provided to the configuration management tool, not the application.
    c. Suggested practice for keeping these configurations with the configuration management tool also requires utilizing a proper data-store separate from normal configuration statements. Could be CSVs, Hiera, or a dedicated parameters manifest for a tool like Puppet. For a tool like OpsCode's Chef, there is already the databag facility available. Wherever available and required, the info should be encrypted with a secret key not available in the repository.
  4. Problem: Backing Services
    Solution:
    a. Whatever other application services are required by the application to serve can be included in the 'Backing Services' list. These will be services like data-stores (databases, memory caches and other such supporting stores), SMTP services, etc.
    b. Every piece of information required for these backing services (node-name, credentials, port#, etc.) should be a configuration detail, maintained as a loaded configuration file via the configuration management tool.
    c. If it's a highly complex application broken into several component applications supporting each other, then for any component application all the other component applications are also 'Backing Services'.
  5. Problem: Build, Release, Run
    Solution:
    a. The development-stage code gets pushed to the codebase and, after passing the intended tests, is pushed to the Build Stage for preparing deploy-able (compiled, dependencies included) code. Read up on the Continuous Integration process for a better approach at it.
    b. The deploy-able code is packaged ready-to-deliver in the Release Stage and pushed into the package-manager repositories. The configuration required for the execution environment is provided to the automated configuration management solution.
    c. The Run Stage involves releasing the application package from the package-manager, plus the intended system-level configurations, via the configuration management solution.
  6. Problem: Processes
    Solution:
    a. No persistent data related to the application shall be kept along with it. All user-input & calculated information required for the service shall be at the 'Backing Services', available to all instances of the application in that environment, helping the application stay stateless.
    b. Get the static assets compiled at the 'Build Stage', served via CDN and cached at the load-balancing server.
    c. Session-state data is a good candidate to be stored and retrieved using a backing memory-powered cache service (like memcache or redis), providing full-blown stateless servers where losing/killing one and bringing up another doesn't impact user experience (see the Ruby sketch after this list).
  7. Problem: Port Binding
    Solution:
    a. Applications shouldn't allow any run-time injection to get utilized by 'Backing Services', but should instead expose their interaction over a RESTful (or similar) protocol.
    b. In a standard setup, the data/information store/provider opens up a socket and the retriever contacts that socket with the required data-transaction protocol. Now this data/information provider can be a 'Backing Service' (like a db service), or it could be the primary application providing information over to a 'Backing Service' (like an application server or load balancer).
    c. Either way, they get configured with the primary application via automated configuration management, with the url, port and any other service-specific required detail being provided.
  8. Problem: Concurrency
    Solution:
    a. Here concurrency is mainly about a highly scalable (on requirement) process model, almost equivalent to how libraries manage internal concurrent processes.
    b. All application & 'Backing Service' processes should be managed such that the process count of one doesn't affect another, as in, say, access via a load balancer over multiple http processes.
    c. All the processes have a process-type and a process-count. There should be a process manager to handle continuous execution of each process at that count. It could be a ruby rack server run with multiple threads on the same server, or multiple nodes with nginx serving an indecent number of users via a load balancer.
  9. Problem: Disposability
    Solution:
    a. Quick code & configuration deployment. The Configuration Management solution makes sure the latest (or required-stage) application code & configuration changes cleanly & quickly replace the old application, exactly as desired.
    b. The Application (and 'Backing Services') architecture shall be elastic; spawning up new nodes under a load-balancer and destroying them when the process-load is low must be smooth.
    c. The application's data transactions & task list should be crash-proof. The data & tasks shall be managed so those processes get re-scheduled in case of an application crash.
  10. Problem: Dev/Prod Parity
    Solution:
    a. Keep dev, staging, ..., and production environments as similar as possible; if not in process count and machine-node count, then necessarily in the deployment tasks. Could utilize 'vagrant' in coordination with the configuration management solution to get quick production-like environments on any development box.
    b. Code manages both the application and the configuration; any developer (with considerable system-level expertise) could get the hang of configuration management frameworks and manage them. Using 'Backing Services' as mentioned would enable different environment-based service providers.
    c. Adopting Continuous Delivery would also ensure no new change in code or configuration breaks the deployment.
  11. Problem: Logs
    Solution:
    a. All staging/production environments will have the application and 'Backing Services' promoting their logs to a central log hub (like syslog, syslog-ng, logstash, etc.) for archival, backed up if required. It can be queried there for analyzing trends in application performance over time.
    b. The central log solution is not configured within the applications; the log solution takes care of what to pick and collect. Can even have a look at log routers (fluentd, logplex, rsyslog).
    c. Specific log trends can be set to alert everyone affected whenever captured again at the Central Log Service (like graylog2, splunk, etc.).
  12. Problem: Admin Processes
    Solution:
    a. Application-level admin processes (like db-migration, specific-case tasks, debug console, etc.) shall also pick the same code and configuration as the running instances of the application.
    b. The admin-task scripts related to the application shall also ship with the application code and evolve with it. Like db-management rake tasks in RubyOnRails applications, run using 'bundler' to pick the required environment-related library versions.
    c. Languages with a REPL shell (like python), or providing one via a separate utility (like 'rails console' for rails), give an advantage in easily debugging an environment-specific issue (which might be arising due to that environment's library versions, data inconsistency, etc.) by directly playing around with the objects seemingly acting as the source of the flaw.
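
To make point 6(c) above concrete, a minimal Ruby sketch of session state living in a backing cache service instead of on the app server; the redis rubygem, host name and key scheme here are all illustrative assumptions:

    require 'json'
    require 'redis'  # assumes the redis rubygem is installed

    # any app instance can serve any user, as no session lives on the box
    redis = Redis.new(host: 'cache.internal', port: 6379)  # hypothetical host

    def save_session(redis, session_id, data)
      # expire after an hour; the app server itself stays stateless
      redis.setex("session:#{session_id}", 3600, data.to_json)
    end

    def load_session(redis, session_id)
      raw = redis.get("session:#{session_id}")
      raw && JSON.parse(raw)
    end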



As I Always Say

Every Generic Solution is very Specifically Placed.



Tuesday, July 17, 2012

how to host your Public YUM (or any) Repo

almost a year ago came the simple idea of getting a really simple static-content (html,css,js,...) website hosted for free on a public portal by Google AppEngine, up to a daily-renewed usage quota: http://gae-flat-web.appspot.com/

a few days back I was playing around with custom yum repos and thought why not get up one of my own for public usage, with RPMs served for either my projects or other non-available rpms; what I came up with is: http://yum-my.appspot.com/flat_web/index.htm

it's nothing fascinating, just a re-mixed usage of a project created from gae-flat-web.

you can access the base skeleton of this re-mixed gae-yum-my (the easy way to host your yum repo online) at https://github.com/abhishekkr/gae-yum-my, which also has an rpm served for gae-flat-web.

now to see how you could get one too~

First Task: register a new portal on Google AppEngine (it's free for reasonable limited usage) using your Google Account. Say your appengine portal is named MY-YUM-MY.

Now the fun begins.

clone the starter kit
$ git clone https://github.com/abhishekkr/gae-yum-my
enter the cloned repo
$ cd gae-yum-my
to actually change your application name in app.yaml
$ sed -i 's/yum-my/MY-YUM-MY/g' app.yaml
create the required linux distro, release branch
$ mkdir yummy/<distro><releasever>/<basearch>
copy all required RPMs in that distro, release branch
$ cp <ALL_MY_RPMS_of__DISTRO_ReleaseVer_BaseArch> yummy/<distro><releasever>/<basearch>/
prepare yum-repo-listing using createrepo command
$ createrepo yummy/<distro><releasever>/<basearch>/ 
now, place a file 'flat_web/yum-my-el6<or-whichever>.repo' with content 
[yum-my-<distro><releasever>-<basearch>] 
name=MY-YUM-MY 
baseurl=http://MY-YUM-MY.appspot.com/yummy/<distro>$releasever/$basearch 
enabled=1 
gpgcheck=0

and can link this file on your 'flat_web/index.htm' homepage 

 to host: 
$ <google_appengine_path>/appcfg.py update <MY-YUM-MY_path>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

now your yum repo has a homepage at http://MY-YUM-MY.appspot.com

and placing the *.repo file above, with the hinted content, will make the RPMs added to your repo accessible.

Sunday, July 8, 2012

Puppet ~ a beginners concept guide (Part 2) ~ road to Modules


Puppet
beginners concept guide (Part 2)
The Road To Modules

[] Puppet Modules?  (no, no..... nothing different conceptually)
Puppet Modules (like in most other technological references, and the previous part of this tutorial) are libraries to be loaded and shared as per the required set of configuration.

Think of a war application to be deployed over tomcat. For the stated requirement, you need tomcat present on the machine with the correct required configurations, and the war file correctly downloaded and placed on the machine with the correct permissions.
For a general scenario requirement like this, two modules come up: one to install, configure and service-start the tomcat service; another to download/locate the war file and use tomcat's configure and service sub-modules.

[] Logic of Structure Logic  (just how is your module structured)
The different components of structural design followed by each puppet module:
  • manifests
    All your '<module/submodule>.pp' manifest files go into '<module_dir>/manifests'.
    Puppet has an auto-load service for modules/sub-modules, so the naming of these *.pp files should suit the class names.
    As discussed above for a 'tomcat' module, you are also gonna create sub-modules like 'tomcat::install', 'tomcat::configure', and 'tomcat::service'.
    So the files that get created will be '<tomcat-module>/manifests/install.pp', '<tomcat-module>/manifests/configure.pp' and '<tomcat-module>/manifests/service.pp'.
    Now if there were a sub-module like 'tomcat::configure::war', the file-path would go like '<tomcat-module>/manifests/configure/war.pp'.
  • templates
    As in any other language, where you want some static data merged with varying passed-on or environment variables and pushed in somewhere as content. Say, for the 'tomcat::config' sub-module you wanna parameter-ize some things like the 'war' file name; this war file-name then gets passed on by the 'deploy_war' module.
    This ruby template goes in '<tomcat-module>/templates/war_app.conf.erb', and whenever required its content is received as "template('<tomcat-module>/war_app.conf.erb')".
  • files
    Any kind of static file can be served from a module using puppet's fileserver mount points. Every puppet module has a default file-server mount location at '<tomcat-module>/files'.
    So a file like '<tomcat-module>/files/web.war' gets served to Puppet Agents pointing to a source of 'puppet:///modules/<tomcat-module>/web.war'.
  • lib
    This is the place where you can plug in your custom mods to puppet and use your newly powered-up puppet features.
    This is the one feature that lets you actually utilize your ruby-power and add on custom facts, providers & types (with default locations at '<tomcat-module>/lib/<facter|puppet>' and '<tomcat-module>/lib/puppet/<parser|provider|type>') to be used via puppet in your modules. To be used, it requires the 'pluginsync = true' configuration to be present at the 'puppet.conf' level (see the custom-fact sketch after this list).
    We'll discuss this in more detail with all sorts of examples in follow-up blogs and add the links here. Until then it can be referred to at docs.puppetlabs.com.
  • spec/tests
    As Love needs Money to keep worldly issues from affecting its charm, similarly Code needs Tests. In the location '<tomcat-module>/spec/' you can have your puppet-rspec tests for the puppet module.
    The path '<tomcat-module>/tests/' would have common examples of how the module classes would be defined.
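
A tiny custom-fact sketch for the 'lib' point above; the fact name, file path and resolution logic are purely illustrative assumptions:

    # <tomcat-module>/lib/facter/tomcat_home.rb
    # loaded by facter itself once pluginsync ships it to the agent
    Facter.add(:tomcat_home) do
      setcode do
        # hypothetical check resolving tomcat's install directory
        File.directory?('/usr/share/tomcat') ? '/usr/share/tomcat' : nil
      end
    end

With 'pluginsync = true', this file gets synced to agents and the value becomes usable in manifests as $tomcat_home.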



[] Modules Fundamental Live  (meaning, the actual code sample.....)


Friday, June 8, 2012

rss-motor v0.0.4 ~ a rubygem to ease up your RSS interactions

started a new rubygem project 'rss-motor' (http://rubygems.org/gems/rss-motor) to aid all RSS-consuming code by providing the entire feed (or filtered, as per choice) as an Array of Hash values, one per Feed Item.
===============================================
 ||}}  //\  //\ _ ||\/|| ||@|| ~++~ ||@|| ||))
 ||\\ _\\  _\\    ||  || ||_||  ||  ||_|| ||\\
===============================================

I tried it in a new project 'rss-fanatic' (https://github.com/abhishekkr/rss-fanatic), made to help RSS Feed fanatics collect required content without the pain of browsing/saving/downloading. Though the RSS-Fanatic project has just started, it shall be usable some time soon.



Here is just a mini HowTo to easily power your code with rss-motor:

First, obviously you'll need to install the gem
$  gem install rss-motor
or if luckily you already use a Gemfile, add the following lines to it
source "http://rubygems.org"
gem 'rss-motor'

Now, the engines currently available from rss-motor (a combined usage sketch follows this list):
  • simple way of getting all the items as array of key=value
    puts Rss::Motor.rss_items 'http://news.ycombinator.com/rss'

  • get an array of items filtered by one or more keywords
    puts "#{Rss::Motor.rss_grep 'http://news.ycombinator.com/rss', ['ruby', 'android']}"

  • to filter even the content available at the <link/> field present in the item, in addition to the normal filter
    puts "#{Rss::Motor.rss_grep_link 'http://news.ycombinator.com/rss', ['ruby', 'android']}"
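
A combined usage sketch of the engines above; note that the require name and the per-item hash keys ('title', 'link') are assumptions here — the actual keys depend on the fields present in the feed:

    #!/usr/bin/env ruby
    require 'rss-motor'

    # one line per feed item; rss-motor returns an Array of Hashes
    Rss::Motor.rss_items('http://news.ycombinator.com/rss').each do |item|
      puts "#{item['title']} => #{item['link']}"
    end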

now go on, ride your own rss-bikes.....

Wednesday, June 6, 2012

Get Set Go Lang ~ part#1

Get Set GO Lang
part# 1
_________________________

What Is Go Lang?
(in case you just came here while curious web surfing)


Go is an OpenSource programming platform developed by Google (and contributors) to be expressive and efficient at the same time.
It's distributed under a BSD-style License.
It's a concurrency-favoring, statically typed, compiler-based language, though it claims to give ease like dynamically typed interpreted code.
_________________________

On your mark, Get Set GO
(getting started with the quick boost usage)

To directly start playing with Go Lang, visit http://play.golang.org/,
where you can directly type/paste in your go-lang code in an online editor and run to get output.

Just a small ++HelluvaWorld code-piece
package main 
import ("fmt"
        "time"
        "os"
        "math" )
func main() {
  fmt.Println("Today is ", time.Now().Weekday())
  fmt.Println("env as ", os.Environ())


  fmt.Println("A Pi on Ceil looks like ",
               math.Ceil(math.Pi), 
      " and a Pie on Floor looks like", 
      math.Floor(math.Pi))
}
[] Installing it for local & full-blown development practice
http://golang.org/doc/install would guide you in getting 'go' working on your Linux, FreeBSD, OSX & Windows platforms.
_________________________

Rewind before the Start Line and take your First Leap
(first useful step to starting use of Go Lang)

[] quickie at variables and constants, a look at GO's declaration style
// package, import & main added here so the snippet runs as-is
package main

import "fmt"

// var used to declare variables, with type at the end
var a, b, c int

// direct initialization doesn't require providing a type
var x, y, z = 1, true, "yes"

// constants just require a 'const' keyword
const newconst = 10

func tellvar() {
  a, b, c = newconst+1, newconst+2, newconst+3
  // inside a function, even the := construct
  // could be used to assign without using 'var'
  clang, java, ruby := "dRitchie", "jGosling", "Matz"
  fmt.Println(a, b, c, x, y, z, clang, ruby, java)
}

func main() { tellvar() }
now you also know that '//' is for comments, as in C/C++ and more.

[] mobilizing functions
just an emulation of the 'math' library's 'pow' method (also a look at using the for loop)
package main 
import "fmt" 
func pow(x int, y int) int {
  a := 1
  for i := 0; i < y; i++ {
    a = a * x
  }
  return a
}
 
func main() {
  fmt.Println( "2 to the power of 5 is ", pow(2, 5) )
}

[] some function parameters style, the pow above is same as
func pow(x, y int) int { ... }

[] function returning multiple values
dream come true of how a function can return any number of values (also use of if condition)
func plusminus(a, b int) (int, int) { 
     if a > b {return a+b, a-b} 
     return a+b, b-a
}
  or could be like
func plusminus(a, b int) (plus, minus int) {
  plus = a + b
  if a > b {
    minus = a - b
  } else {
    minus = b - a
  }
  return
}

usage:
plus, minus := plusminus(1, 2)
fmt.Println("Plus: ", plus,
            "\nMinus: ", minus)

[] more to go..... in more to come


_________________________

Shops to Go
(other fine links to Go, until next part of this tutorial comes)
_________________________

Tuesday, May 29, 2012

Puppet ~ a beginners concept guide (Part 1)

Someone asked me where to start with Puppet learning. I pointed them at the PuppetLabs online doc for Puppet, which is actually nice for a proper understanding.
But for someone trying to start with Puppet, that documentation is a long read, similar to any book.
I searched for a few blogs, but didn't find any content (short but sufficient, fundamental but usable) of the kind I was looking for.
____________________________________________________


Puppet
beginners concept guide (Part 1)

[] What  it  is?  When  is  it  required?  (for all new guys, who came here while just browsing internet)
Puppet is an OpenSource automated configuration management framework (which means a tool that knows how to configure all machines to a deterministic state once you provide it the required set of manifests pulling the correct strings).
It's managed at enterprise level by an organization called PuppetLabs (http://puppetlabs.com/).

It is required#1 when you have a hell-lot of machines that need to be configured in a similar form.
It is required#2 when you have an infrastructure requirement of dynamic scale-up and scale-down of machines with a pre-determined (or at least metadata-calculated) configuration.
It is required#3 to have control over the whole set of configured machines, so a centralized (master-server or repo-based) change gets propagated to all automatically.
And more scenarios come up as you require it.

_____________________________________


[] Quickie.

Install Ruby & Rubygems on the machine where you aim to test it.
$ gem install puppet --no-ri --no-rdoc
Download installers @Windows @MacOSX ::&:: Docs on installing.

Checking if it's installed properly and acting good:
'puppet --version' shall give you the version of the installed puppet once it succeeds.
Executing 'facter' shall get you a list of major System-Environment-related information.

Have a quick puppet run: this shall create a directory '/tmp/pup' if absent, and create a file '/tmp/pup/et' with 'look at me' as its content.
{In case of trying it out on platforms without a '/tmp' location, like Windows, change '/tmp' to 'C:/' or so.}

$ puppet apply -e "file{'/tmp/pup':
                     ensure => 'directory'}
                   file{'/tmp/pup/et':
                     ensure => 'present',
                     content => 'look at me',
                     require => File['/tmp/pup']}
                  "

_____________________________________


[] Dumb  usage  structure.
Create a huge manifest for your node with all resources & data mentioned in it. Then directly apply that manifest file instead of '-e "abc{.....xyz}"'.

Say the example above is your entire huge configuration commandment for the node. Copy all of it to a file, say 'mynode.pp'.
Then apply it similarly, like
$ puppet apply mynode.pp

_____________________________________


[] How  it  evolves?

Now, just as any application has pluggable library components to be loaded and shared as and when required, Puppet too has a concept of modules. These modules can have manifests, file-serving and more.

Modules can be created to any design preference. Normally it works out as a different module per system component. To entertain different logical configuration states for any given system component (and also keep it clean), further re-factoring can be done in the module's manifests, dividing it into different scopes.

Taking example of a module for 'apache httpd'. For a very basic library, you might wanna structure your module like

  • a directory base for your module:  <MODULE_PATH>/httpd/
  • a directory in module to serve static files:   <MODULE_PATH>/httpd/files
  • static configuration file for httpd:   <MODULE_PATH>/httpd/files/myhttpd.conf
    AccessFileName .acl
  • directory to hold your manifests in module:   <MODULE_PATH>/httpd/manifests/
  • a complete solution manifest:   <MODULE_PATH>/httpd/manifests/init.pp
    class httpd{
      include httpd::install
      include httpd::config
      include httpd::service
    }
  • a manifest just installing httpd:    <MODULE_PATH>/httpd/manifests/install.pp
    class httpd::install {
      package {'httpd': ensure => 'installed'}
    }
  • a manifest just configuring httpd:    <MODULE_PATH>/httpd/manifests/config.pp
    class httpd::config{
      file {'/etc/httpd/conf.d/httpd.conf':
        ensure => 'present',
        source => 'puppet:///modules/httpd/myhttpd.conf'
      }
    }
  • a manifest just handling httpd service:  <MODULE_PATH>/httpd/manifests/service.pp
    class httpd::service{
      service{'httpd': ensure => 'running' }
    }

Now, using it

  $ puppet apply --modulepath=<MODULE_PATH>  -e "include httpd"
would install, custom-configure and start the httpd service.


  $ puppet apply --modulepath=<MODULE_PATH>  -e "include httpd::install"
would just install the httpd service.



________________________________________________________________

Part2: Road to Modules

Saturday, March 24, 2012

xml-motor ~ what it is; how & why should you use it

xml-motor ~ what it is; why & how you should use it

Download this article as a pdf covering the what, why & how:
http://speakerdeck.com/u/abhishekkr/p/xml-motor-whatwhyhow-this-xml-parsing-rubygem#


or read it all here.....

Late 2011, I started a new rubygem project for parsing xml/html content.
  @Rubygems: http://rubygems.org/gems/xml-motor
  @GitHub     : https://github.com/abhishekkr/rubygem_xml_motor

I created it to test out my work on a compact, quick & easy xml-parsing algorithm... you can see that
  @Slideshare: http://www.slideshare.net/AbhishekKr/xmlmotor

So currently this is a non-native, completely independent, less-than-250-ruby-LOC library, available as a simple rubygem to be require-d and used with an easy freehand notation, matching against any node attributes.

Current Features:
  • Has single-method access to parse required xml nodes from content or a file. Use it only if you are gonna parse that xml-content once. For using the same xml-content more than once, follow the 3-step way mentioned in the examples.
  • It doesn't depend on the presence of any other system library; it's purely non-native.
  • It parses broken or corrupted xml/html content correctly, just for the content it has.
  • Can parse results on looking for node-names, attributes of node or both.
  • Uses free-freehand notation to retrieve desired xml nodes
    if your xml looks like,
    '<library>...
      <book> <title>ABC</title> <author>CBA</author> </book>...
      <book>
        <title>XYZ</title>
         <authors> <author>XY</author><author>YZ</author> </authors></book>...
    </library>'

    and you look for 'book.author',
    then, you'll get back ['CBA', 'XY', 'YZ'];
    what that means is the child-node could be at any depth in the parent-node.
  • Default return mode is without the tags; there is a switch to get the full nodes.
    As you'd have seen in the above example:
    'CBA' gets sent back by default, not '<author>CBA</author>'
  • To filter your nodes on the basis of attributes, single or multiple attributes can be provided.
  • These attribute searches can be combined up with freehand node name searches.
  • Readme (a bit weird): https://raw.github.com/abhishekkr/rubygem_xml_motor/master/README


Features To Come:
  • Work on making it more performance efficient.
  • Limit over result-nodes retrieved from start/end of matching nodes.
  • Multi-node attribute-based filter for a hierarchical node search.
  • Add the CSS Selector notation devs know; the capability is already present via attribute-based search... just need to add a mapping method.


EXAMPLES of usage:
example code to try: https://github.com/abhishekkr/axml-motor/tree/master/ruby/examples
  • say, you have an xml file 'dummy.xml', with data as
    <dummy>
      <ummy>    <mmy class="sys">non-native</mmy>  </ummy>
      <ummy>
        <mmy class="sys">      <my class="sys" id="mem">compact</my>    </mmy>
      </ummy>
      <mmy type="user">    <my class="usage">easy</my>  </mmy></dummy>
  • its available at rubygems.org, install it as
      $ gem install xml-motor
  • include it in your ruby code,
      #!/usr/bin/env ruby
      require 'xml-motor'
  • get the XML Filename and/or XML data available
      fyl = File.join(File.expand_path(File.dirname __FILE__),'dummy.xml')
      xml = File.open(fyl,'r'){|fr| fr.read }
  • One-time XML-Parsing directly from file
      XMLMotor.get_node_from_file(fyl, 'ummy.mmy', 'class="sys"')
         Result: ["non-native", "\n      compact\n    "]
  • One-time XML-Parsing directly from content
      XMLMotor.get_node_from_content xml, 'dummy.my', 'class="usage"'
         Result: ["easy"]
      
  • Way to go for XML-Parsing for xml node searches
      xsplit = XMLMotor.splitter xml
      xtags  = XMLMotor.indexify xsplit


      [] just normal node name based freehand notation to search:

        XMLMotor.xmldata xsplit, xtags, 'dummy.my'
        Result: ["compact", "easy"]
      [] searching for values of required nodes filtered by attribute:
        XMLMotor.xmldata xsplit, xtags, nil, 'class="usage"'
        Result: ["easy"]

      [] searching for values of required nodes filtered by freehand tag-name notation & attribute:

        XMLMotor.xmldata xsplit, xtags, 'dummy.my', 'class="usage"'
        Result: ["easy"]

      [] searching for values of required nodes filtered by freehand tag-name notation & multiple attributes:

        XMLMotor.xmldata xsplit, xtags, 'dummy.my', ['class="sys"', 'id="mem"']
        Result: ["compact"]

Monday, March 5, 2012

messQ ~ just a fun little project providing socket-based Queue service

messQ is a small project started to implement and improve upon message-queue mechanisms.

What does it do currently? It's just a network service to connect to and enqueue/dequeue messages.

What does it require? Ruby, a terminal and your fingers :)

Git it:           $ git clone git://github.com/abhishekkr/messQ.git
Download:   https://github.com/abhishekkr/messQ/tarball/master

Start messQ server:       $ ruby messQ.rb
  This starts a message queue server at  port 8888.

Enqueue a new message:
  Open a connection at port 8888, then say "enq MESSAGE_TO_BE_QUEUED".

Dequeue the oldest message:
  Open a connection at port 8888, then say "deq". It returns the dequeued message.


+++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
 _  _   __   ___  ___     _____
||\/|| //_  //_  //_  _  //  //  messQ v0.0.1beta
||  || \\_  __// __//   //__//\\_  simplistic socket message Q

+++++++++++++++++++++++++++++++++++++++++++++++++++++++