Wednesday, October 22, 2014

What qualifies a Project to be a Product?

Disclaimer:
This post is based upon a dream (not fiction). Any relation to any enterprise-grade product is purely coincidental.

---

Now, this is not a flame post against Proprietary Software. I'm a FOSS supporter, but with the understanding that for some Businesses...
* paid 24x7 support is a lot more critical than quality
* they need to put trust in a product where the Vendor is bound by agreement to help
which is perfectly fine.
Depending on a business's requirements and policies, different solutions need to be provided: some community-supported FOSS, and then Enterprise Software from large corporations (backed for years).

Again, no, this post is not about software licenses.
This post is about some core values of a piece of software (FOSS or Proprietary) that make it eligible to be used at a worthy scale in an Organization that depends on it.
These also hold true for any user of that Software Project, but they are more crucial for users whose economy depends on it.
And they are the ethical right of the Corporations paying big bucks for a piece of code sold to them on a big brand and big promises.

From here on I use "Products" for all the paid-for (some Enterprise grade) Projects that I got to work on since college, experiencing the truth underneath.
---

So... What qualifies a Project to be a Product?

  • 2 old-skool fundamental mantras: loose coupling, high cohesion
    There are "Products" with a bunch of modules that have separate responsibilities. Correct approach. But when you start to build around them, sometimes they are not that independent in containing their own responsibilities.
    That's worse than not having your project modular at all; at least that way you encourage users to treat it as a black box.
    A "Product" needs to be well modularized, embodying the age-old beautiful development practice of keeping your modules loosely coupled and strongly cohesive.
  • isolated from client specific details
    Anything and everything that depicts details of a client-specific implementation needs to be managed as configuration provided explicitly. There shouldn't be any need to find-and-replace hard-coded "config text" in source or set-up files of the "Product". Nor shall these details be required to be packaged with a custom traditional set-up of the "Product". (A sketch of this follows the list.)
  • generic out-of-the-box setup... tested and isolated
    The "Product" installer binaries shall be self-contained. They can be O.S.-distribution specific, which is fine.
    They should be unaffected by whatever O.S.-level restrictions may or may not be there. Say SELinux is enabled: your setup shall be able to initiate the changes required for the "Product" to function.
    Any dependency shall either be bundled in the installer package, or the installer shall utilize the target package manager to lay it down for itself.
    The installer shall be tested completely in the mode it is supposed to be used in. For example, if it sets up the machine remotely, then it shall be capable of handling all tasks remotely by itself and not depend on the user to place some files for it first or in-between.
  • no mesh between services required for initiation
    The "Product" might have distributed-architecture support, might have modular components collaborating.
    Now these distributed components need to be aware of each other to be able to collaborate. The components shall be robust enough to handle an unavailable dependency component and regain activity once it becomes available (see the retry sketch after this list).
    This awareness can be maintained in a component dedicated to dynamic-configuration query and update, where all component instances export details about themselves and gather information about the others. The environment-preparation mechanism can populate the initial dynamic information, which on requirement can be updated in the collaboration-enabling component and gathered by others.
    Every component can persist the information required for it within itself as well, but that shouldn't depend on the other components. If component-F collaborates with component-A and requires component-A to mark some activities for it even to start itself: bad design.
  • don't promote deprecated technology in newer components
    Some "Products" have a long lifetime, owing to their vastness &/or critical nature. This might lead to certain obsolete technologies (like SOAP in yr2014) being used in newer components.
    You shouldn't start rebuilding an entire perfectly working "Product" for that, yes. But neither shall you use that as an excuse for holding back all new development around it. If the "Product" can't use the power that newer circumstances require, it will be crippled much faster in surviving them.
    Build a mid-layer contract API and seclude your new work from your legacy "Product". Then build new features over that mid-layer API (see the contract-API sketch after this list).
    This will help you avoid the "Product" becoming bloatware just because new features can't be used inherently from the design itself. It will also help preserve the sanity of the perfectly working "Product" against the (library, etc.) changes required for new features. And it will let you use more advanced current best practices for all the newer work, not cripple it.
    If not the "Product" in its entirety, at least every individual component in itself shall follow a unified design/development/interface/platform strategy... it shouldn't have a bifurcation of ideals; if necessary, a new component shall be carved out and plugged in instead of corrupting the existing piece.
  • different style of code, different degree of documentation
    If your code-base is not very huge, beautifully written clean and modular code can survive without documentation.
    If your code is not clean (don't judge it yourself; ask someone expert in the language/framework but unaware of the logic to guess), then have freaking documentation all over it.
    If your code-base is "really" huge, then even if your code is mostly clean... have at least basic module-level documentation.
    That way anyone using the modules knows how to handle them and build upon or around them. And anyone visiting the code-base many years later (if you think your "Product" is capable of that), with much-improved language features at hand, is still able to make sense of what was carved with stone on cave walls.
  • building upon/around it shouldn't involve hack-y ways
    In small home-brewed solutions, unpack-replace-repack is still an arguably accepted solution.
    A freaking "Product", though, shall expose an elegant yet secure API interface to extend features. If it allows overriding of existing features, even those overrides shall be plugged in via that interface, causing the built-in feature to be subdued (see the plug-in sketch after this list).
    It's not effortless, but it is a sane and secure method. The "Product" needs this embedded in its design.
  • for all kinds of data-sets involved, have a proper data-modification strategy
    Hand modification seems harmless at the development stage, sometimes even at testing. In production, if any kind of data modification done during setup or updates doesn't have a migration/rollback strategy around it, that's a danger-zone signal.
    Think proper DB migration scripts over (and not plain diffs of) the current version of scripts (see the migration sketch after this list). It might seem as obvious as all the other points, but there are a bunch of "Products" missing one or another variation of it.
  • take care of sensitive data involved, even during setup of the "Product"
    If your "Product" in any manner forces display or storage of sensitive data (usernames, passwords, machine names, network details, yada yada yada): danger zone again.
  • have at least some level of sanity tests for all logic flows
    Even in this age of software development, if the importance of software testing (acceptance, regression, security) needs to be advocated to you, it's a disaster.
    To state the obvious... with tests, any change can be tracked properly, all technologists have some level of idea of and trust in what's where doing what, and people picking it up much later can build over it without breaking anything.
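
A minimal Go sketch of the client-isolation point above, assuming a made-up config shape and path; the point is only that every client-specific detail arrives as explicit configuration, never as hard-coded text:

package main

import (
	"encoding/json"
	"flag"
	"log"
	"os"
)

// ClientConfig gathers every client-specific detail in one
// explicitly provided place; nothing is hard-coded in source.
type ClientConfig struct {
	DBHost     string `json:"db_host"`
	DBPort     int    `json:"db_port"`
	LicenseKey string `json:"license_key"`
}

func main() {
	confPath := flag.String("config", "/etc/product/client.json", "client-specific configuration file")
	flag.Parse()

	f, err := os.Open(*confPath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cfg := ClientConfig{}
	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("using client DB at %s:%d", cfg.DBHost, cfg.DBPort)
}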
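
For the no-mesh point, a rough Go sketch of a component waiting out an unavailable collaborator instead of refusing to start; the endpoint name is hypothetical:

package main

import (
	"log"
	"net/http"
	"time"
)

// waitFor polls a collaborator's health endpoint until it answers,
// so a missing dependency delays start-up instead of failing it.
func waitFor(name, healthURL string) {
	for {
		resp, err := http.Get(healthURL)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			log.Printf("%s is available", name)
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		log.Printf("%s unavailable, retrying in 5s", name)
		time.Sleep(5 * time.Second)
	}
}

func main() {
	// in a real set-up this address would be gathered from the
	// dynamic-configuration component, not hard-wired here
	waitFor("component-A", "http://component-a.internal:8500/health")
}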
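
For the deprecated-technology point, a sketch of a mid-layer contract API in Go; the payment-gateway names are invented for illustration, the shape is what matters:

package contract

// PaymentGateway is the mid-layer contract: every new feature codes
// against this interface and never touches the legacy module directly.
type PaymentGateway interface {
	Charge(accountID string, amountCents int64) error
}

// legacySOAPGateway adapts the old SOAP-era module to the contract.
type legacySOAPGateway struct {
	endpoint string
}

func (g *legacySOAPGateway) Charge(accountID string, amountCents int64) error {
	// the existing SOAP plumbing is invoked here, unchanged
	return callLegacySOAP(g.endpoint, accountID, amountCents)
}

// callLegacySOAP stands in for the legacy "Product" internals.
func callLegacySOAP(endpoint, accountID string, amountCents int64) error {
	return nil
}

// NewGateway hides the legacy implementation; swapping it out later
// won't disturb anything built on the contract.
func NewGateway(endpoint string) PaymentGateway {
	return &legacySOAPGateway{endpoint: endpoint}
}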
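
For the no-hacks extension point, a bare-bones Go sketch of a plug-in registry; the types and names are made up:

package product

import "fmt"

// Feature is the extension point the "Product" exposes; extensions
// plug in through it instead of unpack-replace-repack hacks.
type Feature interface {
	Name() string
	Run() error
}

var registry = map[string]Feature{}

// Register plugs a feature in; registering an existing name subdues
// the built-in implementation in favor of the override.
func Register(f Feature) {
	registry[f.Name()] = f
}

// Run dispatches to whichever implementation currently owns the name.
func Run(name string) error {
	f, ok := registry[name]
	if !ok {
		return fmt.Errorf("no such feature: %s", name)
	}
	return f.Run()
}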
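
And for the data-modification point, a sketch of a versioned migration runner in Go; the Postgres driver and table name are illustrative choices, not prescriptions:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"sort"

	_ "github.com/lib/pq" // driver choice is illustrative
)

// applyMigrations runs numbered *.sql files in order, skipping ones
// already recorded, so every data change is versioned and repeatable.
func applyMigrations(db *sql.DB, dir string) error {
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)`); err != nil {
		return err
	}
	files, err := filepath.Glob(filepath.Join(dir, "*.sql"))
	if err != nil {
		return err
	}
	sort.Strings(files) // 001_*.sql, 002_*.sql ... apply in sequence
	for _, file := range files {
		version := filepath.Base(file)
		var done bool
		if err := db.QueryRow(`SELECT EXISTS(SELECT 1 FROM schema_migrations WHERE version = $1)`, version).Scan(&done); err != nil {
			return err
		}
		if done {
			continue
		}
		stmts, err := os.ReadFile(file)
		if err != nil {
			return err
		}
		if _, err := db.Exec(string(stmts)); err != nil {
			return fmt.Errorf("%s failed: %v", version, err)
		}
		if _, err := db.Exec(`INSERT INTO schema_migrations (version) VALUES ($1)`, version); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := applyMigrations(db, "migrations"); err != nil {
		log.Fatal(err)
	}
}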


and some more things, but those are for some other time.....

---

LONG QUESTION:
Could a piece of software sold by a "so-called tech mogul" Organization be so crappy that the only purpose it fulfills is vendor lock-in for years of licensing, and not even its creators know it's buggy to its core?
SHORT ANSWER:
Yes! So choose with care.

Thursday, September 18, 2014

base checklist: 10 points to decide whether to choose an OSS for Production or not

One question you get from skeptics (who are actually really important for quality checks) while discussing picking an OpenSource solution over a support-attached closed-source one is: how do you trust it to be safe for a Production release?

The question actually also suits deciding which OSS to pick when there are several.

The question we are trying to answer here is... how to pick an OpenSource solution that will live long and prosper, not turn into a rot that smells on every change in Production as updates accumulate over a period of time.

First, just to mention again what almost every Technologist already understands: the "ensured" support that comes with closed-source software is no guarantee of its safety or of it supporting future technical growth. I don't wanna dwell on the dangers it brings in, 'cuz this is not the post for that. That's an entirely different exhausting list.

So, what to look for in OpenSource software that helps you decide it can be trusted for inclusion in a production release...


While weighing the inclusion of any big or small OpenSource utility into your Production list, the following checklist shall help:

1*) OSS have Licenses too
First of all, check whether their License suits inclusion alongside the licensing of your project. For example, people have been seeking ways (and have somewhat succeeded) to get ZFS on GNU/Linux, its CDDL license being incompatible with the kernel's GPL.

2*) Is the project active "enough"
The second quick check is seeing whether the project has been inactive for a dangerous period. Now, for every kind of project, the dangerous period differs widely; you'll have to depend on your own better judgment and that of a trusted community you know. For a library providing a certain algorithm, post-stable-release changes would be a lot slower. But a webdev framework, by current tradition... will be popping new minor releases every now and then.

Now, a few things for which you'll need to read around a little...
Sources to scout for the following attributes: mailing lists, issue boards, IRC, Twitter streams, and maybe others depending on the project.

3*) How active and inclusive is its community
How well do they handle PullRequests and Issues raised on their project? This includes readiness of response and adopting a better direction; both, but mainly the former.
How well do they handle risks and vulnerabilities reported, if any? The quickest patch is not the main measure; most important is accepting the report and providing a workaround till the main issue gets resolved.

4*) A good core team matters (they need not be very popular)
Check who forms the core team maintaining that OSS. Some other projects of theirs, even if not popular, would give you an idea of how much and how well they maintain their projects.

5*) If Industry already loves it
Not a litmus test, though it strengthens community support and quality checks.
Look for who in the Industry is already using it in the mainstream, and also whether you like the software they have developed. Just shoot a tweet/mail to them... people are mostly helpful. Don't give up on humanity. ;)

6*) Need to scan it personally anyhow
Try it in a sandbox first; monitor that it's not spawning requests to domains it's not supposed to, and not exhibiting any suspicious behavior you don't expect from it.
Also confirm it survives your production security lockdown; not all projects behave the same under restrictions.

7*) Send it on a marathon
Put it under a performance test yourselves. There might be pre-existing load-test results available, and they might even be accurate. But not all implementations suit all projects. Check it under a PoC of your implementation's behavior, with the expected concurrency and latency (a rough sketch follows).
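
A crude Go sketch of such a PoC load test; the target URL and the worker/request counts are placeholders for your own expected concurrency:

package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	const workers, requests = 50, 200
	target := "http://localhost:8080/ping" // placeholder endpoint

	latencies := make(chan time.Duration, workers*requests)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < requests; i++ {
				start := time.Now()
				if resp, err := http.Get(target); err == nil {
					resp.Body.Close()
				}
				latencies <- time.Since(start)
			}
		}()
	}
	wg.Wait()
	close(latencies)

	// summarize: average latency over all concurrent requests
	var total time.Duration
	count := 0
	for l := range latencies {
		total += l
		count++
	}
	fmt.Printf("%d requests, average latency %v\n", count, total/time.Duration(count))
}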

8*) Does it tailor-fit
If it actually provides what you desire without you putting a hack around it, give it a chance. If not quite, confirm that your use suits its design and wouldn't break with the maintainers' project philosophy, at least over the coming versions.

9*) How easy is it to resolve an issue
Are the project's community/developers active enough to help guide you around any problems faced?

10*) Do you love supporting FOSS
If yes, welcome to the world of awesomeness. Some mediocrity (not below that; then look for something else) on some of the points above should only drive you to strengthen the project. It's opensource; at the least, technologists are not supposed to live with a problem if one is faced.

Monday, February 3, 2014

golang ~ get local changes into GOPATH without pushing them upstream

To get your local Golang repos sym-linked at your GOPATH with the local changes available...

goenv_link(){
  if [ $# -ne 2 ]; then
    echo "Links up current dir to its go-get location in GOPATH"
    echo "SYNTAX: goenv_linkme <repo-url>"
    return 1
  fi
  _REPO_DIR=$1
  _REPO_URL=$2

  _TMP_PWD=$PWD
  cd "${_REPO_DIR}"

  if [ -d "${GOPATH}/src/${_REPO_URL}" ]; then
    echo "$_REPO_URL already exists at GOPATH $GOPATH"
    go get "${_REPO_URL}"
    cd "${_TMP_PWD}"
    return 1
  fi
  # ensure the parent directory of the symlink target exists
  _REPO_BASEDIR=$(dirname "${GOPATH}/src/${_REPO_URL}")
  if [ ! -d "${_REPO_BASEDIR}" ]; then
    mkdir -p "${_REPO_BASEDIR}"
  fi

  ln -sf "${PWD}" "${GOPATH}/src/${_REPO_URL}"
  go get "${_REPO_URL}"

  cd "${_TMP_PWD}"
}

alias goenv_linkme="goenv_link $PWD"
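
For instance, using the example project from later in this post:

$ cd {PROJECTS}/goshare
$ goenv_linkme "github.com/abhishekkr/goshare"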

---


Every now and then, working in my favorite new programming language Golang, I have inter-dependent changes among different packages. To confirm their as-required working state, I'd like GOPATH to provide the compiled object with the local changes included.

The utility I've been using to push local package changes into the GOPATH-provided object is the following "goenv_alpha" bash function, provided via my shell-profile.

Say I've a golang project "github.com/abhishekkr/goshare" which utilizes "github.com/abhishekkr/goshare/httpd", "github.com/abhishekkr/goshare/zeromq" and a few more.

If I make some local changes at "{PROJECTS}/goshare" and "{PROJECTS}/goshare/httpd", then to push those into the GOPATH-provided package for testing, the following commands using the "goenv_alpha" shell-util below would do the job...

$ goenv_alpha "{PROJECTS}/goshare" "github.com/abhishekkr/goshare"
$ goenv_alpha "{PR..}/goshare/httpd" "github.com/abhishekkr/goshare/httpd
"

These commands will ask you whether to make a backup file of the currently existing version of the package resource from GOPATH. You can give it any name (which will be asked for while restoring), or leave it empty to avoid creating a backup file.

~

goenv_alpha(){
  _TMP_PWD=$PWD
  if [ $# -ne 2 ]; then
    echo "Provides Alpha changes usable as any other go package."
    echo "SYNTAX: goenv_alpha <repo-dir> <repo-url>"
    return 1
  fi
  _REPO_DIR=$1
  _REPO_URL=$2
  cd "${_REPO_DIR}"

  _PKG_PARENT_NAME=$(dirname "$PWD")
  _PKG_NAME=$(basename "$PWD")
  _PKG_NAME_IN_REPO=$(basename "$_REPO_URL")
  if [ "$_PKG_NAME_IN_REPO" != "$_PKG_NAME" ]; then
    echo "Path for creating alpha doesn't match the import 'url' for it."
    return 1
  fi

  # 'go build -work' prints its temporary WORK dir to stderr
  go build -work . 2> "/tmp/${_PKG_NAME}"
  _BUILD_PATH=$(sed 's/WORK=//' "/tmp/${_PKG_NAME}")
  if [ ! -d "$_BUILD_PATH" ]; then
    echo "An error occurred while building; it's recorded at /tmp/${_PKG_NAME}"
    return 1
  fi
  rm -f "/tmp/${_PKG_NAME}"

  _CURRENT_OBJECT_PATH="${GOPATH}/pkg/${GOOS}_${GOARCH}"
  _CURRENT_OBJECT="${_CURRENT_OBJECT_PATH}/${_REPO_URL}.a"
  _NEW_OBJECT="${_BUILD_PATH}/_${_PKG_PARENT_NAME}/${_PKG_NAME}.a"

  echo "Do you wanna backup the current object? If yes, enter a filename for it: "
  read GO_ALPHA_BACKUP
  if [ ! -z "$GO_ALPHA_BACKUP" ]; then
    mv "$_CURRENT_OBJECT" "${_CURRENT_OBJECT_PATH}/${_REPO_URL}/${GO_ALPHA_BACKUP}.backup"
  fi

  mv "$_NEW_OBJECT" "$_CURRENT_OBJECT"
  cd "$_TMP_PWD"
  echo "\nAlpha changes have been updated at ${_CURRENT_OBJECT}."
}

~

You can undo the push of the local-changes-inclusive package resource if you created a backup file of the earlier existing file.

The following commands utilize the shell-util function "goenv_alpha_undo" provided below.

$ goenv_alpha_undo "{PROJECTS}/goshare" "github.com/abhishekkr/goshare"
$ goenv_alpha_undo "{PR..}/goshare/httpd" "github.com/abhishekkr/goshare/httpd"

This will list the names of the backup files present, if any; then you can provide the name of your chosen backup file and restore to that package state.

~
goenv_alpha_undo(){
  _TMP_PWD=$PWD
  if [ $# -ne 2 ]; then
    echo "Reverts Alpha changes, restoring a backed-up go package object."
    echo "SYNTAX: goenv_alpha_undo <repo-dir> <repo-url>"
    return 1
  fi
  _REPO_DIR=$1
  _REPO_URL=$2
  cd "${_REPO_DIR}"

  _PKG_NAME=$(basename "$PWD")
  _PKG_NAME_IN_REPO=$(basename "$_REPO_URL")
  if [ "$_PKG_NAME_IN_REPO" != "$_PKG_NAME" ]; then
    echo "Path for creating alpha doesn't match the import 'url' for it."
    return 1
  fi

  _CURRENT_OBJECT_PATH="${GOPATH}/pkg/${GOOS}_${GOARCH}"
  _CURRENT_OBJECT="${_CURRENT_OBJECT_PATH}/${_REPO_URL}.a"

  echo "Available package files are:"
  ls -1 "${_CURRENT_OBJECT_PATH}/${_REPO_URL}" | grep "$_PKG_NAME"
  echo "Enter your backup filename for it: "
  read GO_ALPHA_BACKUP
  if [ -z "$GO_ALPHA_BACKUP" ]; then
    echo "\nNo Backup file was entered."
    return 1
  fi

  mv "${_CURRENT_OBJECT_PATH}/${_REPO_URL}/${GO_ALPHA_BACKUP}" "$_CURRENT_OBJECT"
  cd "$_TMP_PWD"
  echo "\nAlpha changes have been reverted with the provided backup file."
}
~

The full [WIP] shell-profile for golang utilities is at:
https://github.com/abhishekkr/tux-svc-mux/blob/master/shell_profile/a.golang.sh