Tuesday, February 9, 2016

Check LUN Ownership on an EMC VNX Array with Ruby + Naviseccli

#!/usr/bin/ruby
require 'colorize'
require 'optparse'
require 'terminal-table'
##################################################################################################################################################
## Check Pool LUN Ownership
##
## lists the pool LUNs and their current, default, and allocated owners
## current - the SP that currently owns the LUN
## default - the SP that the LUN normally lives on
## allocated - the SP that the LUN was initially created on
##
## Written by Wandering Generalist
## Logic taken from a Perl Script of the same name Written by Mat Harvest
##################################################################################################################################################
##
##
## Revision Control
## 0.2 initial release
##
##################################################################################################################################################
##################################################################################################################################################
options = { :array_ip => nil, :user_id => nil, :password => nil, :scope => nil }
OptionParser.new do |opts|
  opts.banner = "Usage: checklunownership.rb [options]"
  opts.on('-a', '--array_ip IP_ADDRESS', "IP address of the array".colorize(:yellow)) { |array_ip|
    options[:array_ip] = array_ip
  }
  opts.on('-u', '--user_id USER_ID', "User ID to log in to the array".colorize(:yellow)) { |user_id|
    options[:user_id] = user_id
  }
  opts.on('-p', '--password PASSWORD', "Password to log in".colorize(:yellow)) { |password|
    options[:password] = password
  }
  opts.on('-s', '--scope SCOPE', "Scope of the user".colorize(:yellow)) { |scope|
    options[:scope] = scope
  }
  opts.on('-h', '--help', 'Displays help'.colorize(:yellow)) do
    puts opts
    exit
  end
end.parse!
usage = "checklunownership.rb --array_ip IP_ADDRESS --user_id USER_ID --password PASSWORD --scope SCOPE".colorize(:yellow)
if options[:array_ip].nil?
  puts "Array IP address is missing!".colorize(:red)
  puts usage
  exit
end
if options[:user_id].nil?
  puts "User name is missing!".colorize(:red)
  puts usage
  exit
end
if options[:password].nil?
  puts "Password is missing!".colorize(:red)
  puts usage
  exit
end
if options[:scope].nil?
  puts "Scope entry for the user is missing!".colorize(:red)
  puts usage
  exit
end
arrayip  = options[:array_ip]
username = options[:user_id]
passwd   = options[:password]
scope_id = options[:scope]
owners=`/opt/Navisphere/bin/naviseccli -h "#{arrayip}" -user "#{username}" -password "#{passwd}" -scope "#{scope_id}" lun -list -isPoolLUN -alOwner -owner -default`
stats=owners.split("\n\n")
# One CSV-style row per LUN is collected here
fm = []
stats.each do |lun|
  fields     = lun.split("\n")
  lunID      = fields[0].split(" ")[3].to_s
  lunName    = fields[1].gsub('Name: ', '')
  curOwner   = fields[2].split(":")[1].strip
  defOwner   = fields[3].split(":")[1].strip
  allocOwner = fields[4].split(":")[1].strip
  # A LUN is optimal (not trespassed) when its current, default
  # and allocation owners all match
  if curOwner == defOwner && defOwner == allocOwner
    fm << [lunID, lunName, curOwner, allocOwner, defOwner, "Optimal", "No"].join(",")
  else
    fm << [lunID, lunName, curOwner, allocOwner, defOwner, "Non Optimal", "Yes"].join(",")
  end
end
table = Terminal::Table.new :title => "LUN OWNERSHIP REPORT",
                            :headings => ["LUNID", "LUNName", "CurrentOwner", "AllocationOwner", "DefaultOwner", "Status", "Trespassed"] do |t|
  fm.each do |l|
    t << l.split(",")
  end
end
puts table
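A sample invocation might look like the following; the IP address, credentials, and scope below are hypothetical placeholders, not values from a real array.

./checklunownership.rb -a 192.168.1.50 -u sysadmin -p 'SecretPass' -s 0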


Friday, August 28, 2015

Daily Puppet - Virtual Resources

Virtual resources are used when the same resource needs to be part of more than one class.

Let's look at an example.

We have a user named "terry", who has to be part of two classes:

1. test --> a very generic module, applied to CentOS
2. testsql --> another very generic module :), but applied to a different OS, Ubuntu

So, we define the user first:

class test::sysusers inherits test {
  user { 'terry':
    ensure => present,
    uid    => 4009,
    home   => '/home/terry',
    shell  => $shell,
  }
}


We will then declare this class in two modules (manifests), test and testsql.

class test ($value=$test::params::value,
            $shell=$test::params::shell) inherits test::params {
  include test::virtual
  contain test::config
  User <| title == 'minder' |>
  realize(User['hunter'])
  class { 'test::sysusers': }
}

class testsql {
  include ::test::virtual
  notify { "I am from the testsql": }
  class { '::test::sysusers': }
}


We have assigned the "test" module to a CentOS server named centosserver, and the "testsql" module to an Ubuntu system named sqlserver1.

node 'sqlserver1.labhome.com' {
  class { 'testsql': }
}
node 'centosserver.labhome.com' {
  class { 'test': }
}


We will now run the puppet agent on both systems.

[root@centosserver ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
sh: dpkg: command not found
Info: Caching catalog for centosserver.labhome.com
Info: Applying configuration version '1440750961'
Notice: /Stage[main]/Test::Sysusers/User[terry]/ensure: created
Notice: Applied catalog in 0.14 seconds
root@sqlserver1:/home/user# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Class[Test::Sysusers] is already declared; cannot redeclare at /etc/puppetlabs/code/environments/production/modules/testsql/manifests/init.pp:6 at /etc/puppetlabs/code/environments/production/modules/testsql/manifests/init.pp:6:2 on node sqlserver1.labhome.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
root@sqlserver1:/home/user#


Refer to the error message for sqlserver1. It says "Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration:"
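The root cause: a resource-like class declaration (class { '...': }) may appear only once in a catalog, while include may be repeated safely. Here, test::sysusers inherits test, so declaring it also declares test, whose body declares test::sysusers a second time. A brief illustration of the general rule (not code from the post):

include test::sysusers
include test::sysusers          # fine: include is idempotent
class { 'test::sysusers': }     # duplicate declaration error if already declared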

To overcome this, we use virtual resources.

class test::sysusers inherits test {
  @user { 'terry':
    ensure => present,
    uid    => 5009,
    home   => '/home/terry',
    shell  => $shell,
  }
}


Look at the declaration: the "@" in front of user indicates that the resource is declared as a virtual resource.

Now we need to realize the resource in the init.pp of both the test and testsql modules, so that the user is actually created on the target systems.

cat modules/test/manifests/init.pp
class test ($value=$test::params::value,
            $shell=$test::params::shell) inherits test::params {
  include test::virtual
  include test::sysusers
  contain test::config
  User <| title == 'minder' |>
  realize(User['terry'])
}

cat modules/testsql/manifests/init.pp
class testsql {
  include ::test::virtual
  include ::test::sysusers
  User <| title == 'minder' |>
  User <| title == 'wallander' |>
  User <| title == 'terry' |>
  notify { "I am from the testsql": }
}


The snippet shows that realization can be done in two ways:

a) realize (User['terry'])
b) User <|title == 'terry' |>

(Don't forget to include the class test::virtual in both init.pp files; otherwise the collectors cannot find the virtual resource at all.)
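The class test::virtual itself is not shown in the post. Here is a minimal sketch of what it might contain, assuming it simply declares the remaining virtual users the collectors refer to; the uids are made up:

class test::virtual {
  @user { 'minder':
    ensure => present,
    uid    => 5010,
  }
  @user { 'wallander':
    ensure => present,
    uid    => 5011,
  }
}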

Now, we run the puppet agent on the clients again



With virtual resources, we can create the user terry on both systems.





Sunday, February 15, 2015

Daily Puppet - Puppet Environment Directories

Puppet environments help to differentiate between levels of code, such as production, staging, and development.

To set up directory environments, we need to make some changes to the default puppet.conf file.


[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
confdir = /etc/puppet
environmentpath = $confdir/environments
default_manifest = $confdir/manifests
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig


Once these changes are applied, we will need to restart the puppet master service. 

We have set "/etc/puppet/environments" as the path searched for environment folders, by using the configuration directive:
environmentpath = $confdir/environments


We will now create a staging environment for a test module, "mytestmodule".

/etc/puppet/environments/staging
tree
.
├── environment.conf
├── manifests
│   └── site.pp
└── modules
    └── mytestmodule
        └── manifests
            └── init.pp
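The environment.conf visible in the tree is not listed in the post. A minimal sketch of what it might contain; both settings are standard directory-environment directives, but these exact contents are an assumption:

# Hypothetical environment.conf
modulepath = ./modules
manifest   = ./manifests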



Our init.pp file is a simple one; it will create a test file. We will then assign the module to the node.

[root@centos staging]# cat modules/mytestmodule/manifests/init.pp
class mytestmodule {
  file { "/home/testingenv.txt":
    ensure => present,
  }
}
[root@centos staging]# cat manifests/site.pp
node centos {
  include mytestmodule
}


Now, we test our environment,

puppet agent --noop --test --server centos --environment=staging
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for centos
Info: Applying configuration version '1424034120'
Notice: /Stage[main]/Mytestmodule/File[/home/testingenv.txt]/ensure: current_value absent, should be present (noop)
Notice: Class[Mytestmodule]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.03 seconds

This kind of setup is very helpful when we want to separate our production code from the testing code.










Wednesday, January 21, 2015

Daily Puppet 3 - Inheritance

Using relationships such as before and require, we can set the order in which resources are applied on an agent.

A test manifest is shown below; it is a rather simple one.


# Demonstrate resource ordering
class ownerfile {
  file { "/tmp/owner.txt":
    content => "I am the owner and the next file is dependent on me",
    before  => File["/tmp/depenfile"],
  }
  file { "/tmp/depenfile":
    content => "I am the dependent of owner",
  }
}



Here we have defined a class "ownerfile", which has a file resource "/tmp/owner.txt". There is another file, "/tmp/depenfile", which will be created after the owner.txt file.


First the owner.txt file is created, and then depenfile is created.


Another approach is to use "require":

class ownerfile {
  file { "/tmp/dependrequire.txt":
    content => "My requirement is the owner txt file\n",
    require => File["/tmp/owner.txt"],
  }
  file { "/tmp/owner.txt":
    content => "I am the owner\n",
  }
}


The above manifest defines a file resource, "/tmp/dependrequire.txt", which requires another file resource, "/tmp/owner.txt".
First owner.txt is created, as it is required by the file resource dependrequire.txt; then dependrequire.txt is created.
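Puppet also offers chaining arrows as a third way to express the same ordering. A brief sketch, equivalent to the before/require examples above:

# The resource on the left is applied before the one on the right
File['/tmp/owner.txt'] -> File['/tmp/dependrequire.txt']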

Thursday, August 14, 2014

Daily Puppet -2



It has been a while since I wrote something on this blog (not that there are people expecting anything here); it is mostly for my own reference. Puppet is something I have been interested in learning for some time. I did some learning earlier and then dropped it, like many of my other learning projects.


So, I learnt how to create users and home directories.

As usual, I had to revise the basics first.

Resources are defined in manifests; a resource can be a user, a package, a service, etc. Each module keeps its manifests in a folder of its own, for better manageability.

/etc/puppet/modules

root@ubuntu:/etc/puppet/modules# ls -l

drwxr-xr-x 3 root root 4096 Jun 9 19:25 homedirs
drwxr-xr-x 3 root root 4096 Jun 9 19:25 users


As shown in the above snippet, two modules have been created.

Each of these module folders has a manifests folder within it.







Let us examine the init.pp file.

root@ubuntu:/etc/puppet/modules/users/manifests# pwd
/etc/puppet/modules/users/manifests

cat init.pp
class users {
  group { 'edinhazard':
    ensure => present,
  }

  user { 'edinhazard':
    ensure     => present,
    gid        => 'edinhazard',
    shell      => '/bin/bash',
    home       => '/home/edinhazard',
    managehome => true,
    password   => '$1$CcKAjalB$nUN4y42rmL5ptKs6413Id0',
  }
}

This init.pp file first defines a class called "users".

Within the class "users", we declare a group and a user, along with the attributes assigned to that user.
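Note that the password attribute expects a pre-hashed value; the $1$ prefix above marks an MD5 crypt hash. One common way to generate such a hash (a sketch; not necessarily how the hash above was produced):

# Generate an MD5 crypt hash for the user's password attribute
openssl passwd -1 'yourpasswordhere'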



Next, we look at another module, "homedirs".

root@ubuntu:/etc/puppet/modules/homedirs# cat manifests/init.pp
class homedirs {
  file { "/home/edinhazard":
    ensure => "directory",
    owner  => "edinhazard",
    group  => "edinhazard",
  }
}

Here we are making sure the home directory created for the user 'edinhazard' has the right ownership.

Having defined these, how do we get them applied on the nodes?

There is a file named 'nodes.pp' in the "/etc/puppet/manifests" folder. Assign the modules to the node 'desktop' as shown below.

root@ubuntu:/etc/puppet/manifests# cat nodes.pp
node 'desktop' {
  include users
  include homedirs
}


Now we apply the configuration on the client, i.e., "desktop".

From the client node, we run puppet agent --test:

puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for desktop.home
Info: Applying configuration version '1402368999'
Notice: /Stage[main]/Users/Group[edinhazard]/ensure: created
Notice: /Stage[main]/Users/User[edinhazard]/ensure: created
Notice: Finished catalog run in 0.71 seconds


Let's now see if the desired user has been created:

cat /etc/passwd | grep edin
edinhazard:x:1001:1001::/home/edinhazard:/bin/bash

root@desktop:/home# ls -l | grep edin
drwxr-xr-x 2 edinhazard edinhazard 4096 Jun 22 11:59 edinhazard

So, Puppet did what it was asked to do. This was a simple configuration to apply.

Syntax reference

http://www.puppetcookbook.com/posts/create-home-directory-for-managed-users.html 




Monday, June 23, 2014

Daily Puppet - 1

Puppet is an open source framework and toolset for managing the configuration of computer systems.

For some time I have wanted to learn some tricks with Puppet. This weekend I finally made up my mind and did some very basic stuff with it.

My goal was to push a file resource from the puppet server to the client. I used Ubuntu 12.04 machines for testing. If one doesn't want to install the whole thing, a pre-configured VM can be downloaded from http://info.puppetlabs.com/download-learning-puppet-VM.html

So, let's start.

Installing Puppet master on Linux

aptitude install puppetmaster

Installing Puppet client



aptitude install puppet

Connecting the Puppet Client to the Server

We have to sign the client's certificate request on the master:

puppet cert --sign "desktop"

Output:
notice: Signed certificate request for my-desktop
notice: Removing file Puppet::SSL::CertificateRequest my-desktop at '/var/lib/puppet/ssl/ca/requ
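(Before signing, the agent must have contacted the master once so that a certificate request exists; pending requests can then be listed on the master. A sketch, assuming the Puppet 3-era commands used in this post:)

puppet agent --test            # on the client: generates the certificate request
puppet cert list               # on the master: shows pending requests
puppet cert --sign "desktop"   # on the master: signs the request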

Creating the First Manifest

On the server, move to the folder /etc/puppet/manifests and create a "nodes.pp" file:


gedit nodes.pp

node 'my-desktop' {
  file { "/tmp/hello.txt":
    content => "Hello, My First Manifest",
  }
}
Create “site.pp” in the same folder


gedit site.pp

import 'nodes.pp'
$puppetserver = 'ubuntu.home'
From the client, run the command:


puppet agent --test

Output:
info: Caching catalog for my-desktop
info: Applying configuration version '1370675206'
notice: Finished catalog run in 0.03 seconds
After this, when we check the /tmp folder on the client, we can see the file there.

What just happened?



I am a beginner with Puppet; my explanations may not be exactly right, but these are my observations.


We installed a Puppet Master server

We installed a Puppet client

Added the client to the Puppet master's visibility, i.e. enabled server-client interaction

Created the first manifest. A "manifest" is Puppet's way of conveying configuration information to the client. Example:

If we look at the nodes.pp file, there is a resource, a "file resource" called "hello.txt", defined for the client "my-desktop" with the content "Hello, My First Manifest". A resource is applied to the client when the client requests its configuration; it can be a package, a file, etc. (in this case it is a file).

When the Puppet client contacts the server for the resource, it will get the file in its /tmp folder (the resource is "/tmp/hello.txt").


So for now, this is just the beginning of my date with Puppet.

Thursday, January 16, 2014

Logical Volume Management - Extending root file system online in Linux

Situation:
On a Linux server, the root file system got full, as it was running an application prone to creating a large number of files.

If the Linux system's partitioning is LVM based, then extending the file system is very easy.

A few checks beforehand will certainly help.

Determine the volume group you want to extend, and confirm that the mount point lives on a logical volume belonging to the VG you are extending.
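For example, checks like these would show which VG backs the root file system (a sketch; volume and VG names will differ per system):

# Which device backs the root file system?
df -h /
# Which VG does each logical volume belong to, and how big is it?
lvs -o lv_name,vg_name,lv_size
vgs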

Steps : 
  • Create a Physical volume 
  • Add it to the Volume Group, aka, VG
  • Extend the logical volume
  • Re-size the file system online
Example : 
Physical volume creation 
            pvcreate /dev/sdb

Adding it to the Volume Group 
            vgextend vg_livedvd /dev/sdb

Extending the Logical volume 
            lvextend -L +30G  /dev/vg_livedvd/LogVol00

Re-size the file system online 
            resize2fs /dev/mapper/vg_livedvd-LogVol00
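Note that resize2fs applies to ext2/3/4 file systems; an XFS root would be grown with xfs_growfs instead. To confirm the result, a quick check might look like this (names as in the example above):

# Verify the logical volume and the mounted file system picked up the new size
lvs vg_livedvd
df -h /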