Thursday, 8 December 2011

Dictionary in a Box

Hi girls and guys,

This time just a short trick. If you need a good dictionary,
you can have a dict-based one in the shell too. I installed
ding via zypper

zypper in ding

and added this function to my .bashrc in $HOME

translate()
{
    clear
    # split the |-separated entries onto individual lines,
    # then highlight the search term
    tr '|' '\n' < /usr/share/dict/de-en.txt | grep --color "$1"
}


this way you can type

translate hello

in a bash shell and get colorized output of the translation. I hope you find this a useful hint.

You may also use one of these dictionaries.

Until next time then.

Thursday, 10 November 2011

Run jobs asynchronously with anacron

Hi Girls and Guys,

Today I will tell you something about anacron.

With anacron you are able to install and maintain a crontab whose jobs run periodically but asynchronously. Now what does this mean?

Imagine you have a job on a laptop that you want to execute on a regular basis, but you don't know at which times the computer is actually turned on, so it is difficult or even impossible to schedule such jobs at fixed times. Anacron does not assume that the machine you are working on runs 24/7 like a production server.

I will guide you through setting up an anacron job that performs a virus scan with avgscan, but first let me tell you a little bit more about anacron: First of all, anacron is not another cron daemon. Rather, think of it as an extension to the already installed cron daemon. When you first install anacron on your distribution, it sets up a script in /etc/cron.hourly that starts /usr/sbin/anacron every hour, which then reads and executes the commands you have specified in /etc/anacrontab. In openSUSE you can install anacron using zypper. Simply type:

zypper in cronie-anacron

as root to install it.

Now let's take a look at /etc/anacrontab:

++++++++

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=18-23

# period(days)  delay(minutes)  job-identifier  command
@daily 30 cron.avg_l1zard avgscan -W /home/l1zard/.wine -awbHpPcmj -r /var/log/avg_l1zard.report /home/l1zard/

@daily 30 cron.avg_tmp avgscan -awbHpPcmj -r /var/log/avg_tmp.report /tmp/
@daily 30 cron.avg_root avgscan -awbHpPcmj -r /var/log/avg_root.report /root/

++++++++

The variable SHELL tells anacron which shell will be used to execute the commands. The PATH variable works like the PATH environment variable in bash and tells the shell where to look for executable files. With MAILTO you can specify a list of users who will receive a system mail informing them about the success or failure of the jobs. With RANDOM_DELAY you specify the maximum random delay that is added to the base delay set up in column 2. START_HOURS_RANGE is the most interesting part of this configuration file: here you set up the time window per day in which anacron is allowed to run jobs. This is useful when your commands use a lot of system resources and you want anacron to run them at a time where you can afford such a loss of performance for other tasks.

Now let's have a look at the more interesting part, the job table. In the first column you specify the period in days at which you wish to run the job. You may also use macros such as @daily, @weekly or @monthly to run jobs on a daily, weekly or monthly basis. In the second column you give anacron the base delay. Anacron waits this amount of time plus the random delay before it starts any jobs; this way you can be sure that anacron gives you some time for other work after you have logged in. The third column is the job identifier and also the name of the spool file: you will find a file called /var/spool/anacron/job_identifier. The last column of the job table is the command itself, including all options and parameters. However, you must ensure that the program you want to run can be found via the PATH of your configuration file.
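
For example, to check when one of the jobs above last ran, you can look at its spool file (using the job identifier from the table); it simply contains the date of the last execution:

cat /var/spool/anacron/cron.avg_root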

So let's test our anacrontab as a final step. Executing

anacron -T && echo "ok"

will test for us whether the anacrontab we just installed is valid. And with

anacron -fs

we can test whether the commands are actually doing what we want them to do. The -s option tells anacron to execute the jobs one after another rather than all at once. Remember that you get a mail informing you about the success or failure of the jobs anacron executed.

That's it for today folks.

Wednesday, 26 October 2011

No, I said this way! IO Redirection

One of the most powerful tools in Linux or Unix scripting is the ability to redirect StdIn, StdOut and StdErr, or even other IO channels used for inter-process communication, from and to other descriptors.

There are basically three of them, as mentioned above:
0 or StdIn is the standard input descriptor.
1 or StdOut is the standard output descriptor.
2 or StdErr is the standard error descriptor.


You may use other descriptors like 3 or higher, up to 255, for programming or scripting purposes, e.g. when creating a script that uses zenity or dialog. Using

ls /dev/fd/

will reveal all descriptors you have available in the current shell.
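
As a small sketch of how such a higher descriptor is typically used with dialog (assuming the dialog package is installed): dialog draws its interface on StdOut and prints your selection on StdErr, so you park the terminal's StdOut on descriptor 3, swap the two streams for the command substitution, and close 3 afterwards.

exec 3>&1                 # save the terminal's StdOut on descriptor 3
choice=$(dialog --menu "Pick one:" 12 40 2 1 foo 2 bar 2>&1 1>&3)
exec 3>&-                 # close descriptor 3 again
echo "you picked: $choice"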

Redirecting them is often necessary to get rid of unwanted output, or to write output to a file so you can read it later, since it usually scrolls by faster than your eye can follow. However, you have to make sure that bash's noclobber option is disabled if you want redirections to overwrite existing files. Check this by using:

echo $SHELLOPTS

If you see noclobber in the output, you can disable it with:

set +o noclobber

Basically you have these redirection options for the program myprogram

(1) myprogram > Output.log redirects StdOut to the file Output.log

(2) myprogram >> Output.log appends StdOut to the file Output.log

(3) myprogram 2> Output.log writes StdErr to Output.log. The same works with >> for appending.

(4) myprogram &> Output.log writes StdErr and StdOut to the Output.log file.

Some administrators still use the older and more complicated construct
myprogram > Output.log 2>&1
This first redirects StdOut to Output.log and then makes StdErr a copy of StdOut (2>&1), so both streams end up in the file. Note that bash processes redirections from left to right, which is why the 2>&1 must come after the StdOut redirection: by that point StdOut already points to the file. You can use whatever descriptors you want; N>&M will always make descriptor N point to wherever descriptor M currently points.
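
To see why the order matters, compare these two invocations (myprogram is again just a placeholder):

myprogram > Output.log 2>&1    # StdOut goes to the file first, then StdErr copies it: both land in the log
myprogram 2>&1 > Output.log    # StdErr copies the terminal before StdOut is redirected: errors stay on screen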

(5) myprogram < Input.lst makes the program read its input from Input.lst

Remember what I wrote about the noclobber option? Actually, I lied a little: you can force a redirection despite noclobber by using the >| construct.

(6) myprogram >| Output.log will force the redirection to Output.log

Sometimes, for example if you write a daemonized script, you will find it useful to close certain descriptors:

(7) 2<&- will close descriptor 2 for input.

And guess what

(8) 2>&- does? It closes descriptor 2, StdErr, for output.

Again, this applies to any file descriptor you bring to life or that already exists.
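
As a sketch, the top of a daemonized script could send everything to a log and then close StdIn (the log file name here is just an example):

exec >> /var/log/mydaemon.log 2>&1   # all further output goes to the log
exec 0<&-                            # close StdIn, nothing is read from the terminal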

Here are two examples of what else you can do with IO redirection:

If you don't have an editor you may "create" one by using

cat > My.txt << EOF
foo
bar
EOF

to write foo and bar to the file My.txt. Clearing a file is often done with
cat /dev/null > Output.log
But using:
:> Output.log
is faster to type and does the same. This works because : is the shell's null command, which does nothing and produces no output, so :> overwrites the file with nothing.

Want Real Logging?

However, if you want real logging in a script you have written, you may want to use logger: with its -t and -p options you can tag the message and give it a proper facility and priority, which syslog or syslog-ng can then sort into the right place.

logger -t mydaemon.sh -p local0.err "something went wrong"


will send the message, tagged with the script's name and marked with facility/priority local0.err, to the syslog daemon. You can then configure where such messages end up by editing /etc/syslog.conf.
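
A matching line in /etc/syslog.conf could look like this (the facility and file name are just the examples used above) to give the script its own log file:

local0.err    /var/log/mydaemon.err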

Monday, 24 October 2011

The Art of Human Hacking

Hi Girls and Guys,

I just read this book:
http://www.social-engineer.org/social-engineering/the-art-of-human-hacking/

Not that I will encourage anybody to use these techniques. But being prepared against the weakest piece in the chain, the human, is the best way to protect your infrastructure and yourself from security flaws. The CCC has some good videos from various congresses showing how easy it is to take over someone's infrastructure:

here
and here

Have fun

Sunday, 16 October 2011

Good Documentation Practice

UPDATE (16.10.2011)

Hi girls and guys,

whenever there is something to reinstall or to recover, there might be some additional tasks to fulfill to get everything back into the state it's supposed to be in. Sure, backing up a system often can save you a lot of effort, but it's also necessary to document your system and keep track of the things you change from the point where you leave a fresh vanilla setup.

There are several reasons why you should be doing this, and I will list them briefly:

Fixing issues can often be a lot easier when you know exactly what has changed, which software was installed or upgraded recently, and which configuration settings have changed.

There comes a time when you have to retrace a task that is very complex, and a good record of your tracks can save you a lot of time.

On systems where more than one administrator does the job, documenting for the others, so that they are able to follow the steps you were working on, makes life a bit nicer for everyone.

It can also be a good way to write down what you want to implement or deploy on your system later.

That nails it down to four very simple document types in my personal wiki:
Bugs --> where you describe errors and unusual behavior for yourself and others.

Howto --> where you document, step by step, what you do to fulfill a major task such as setting up a web server.

Last action taken --> where you document whenever you change something on the system. Here you should also note whenever you update certain packages or change bigger things.

Todo --> tasks you have not yet managed to do, because of lack of time or maybe because you first need to find out how to get them done.

You can use a web-server-based wiki, but you may also want to give zim or rednotebook a try for this.

But using a wiki is only half the truth. You should also comment your work. That means: whenever you change a line in a configuration file, document it, and whenever you write a program or a script, comment what the lines are meant to do, especially those lines where you sat for hours figuring out how to accomplish a particular task.

The next step is a versioning system for your /etc folder. You may want to use etckeeper. Since I use git for development, I will use git as the version control backend for etckeeper. I am not going into detail on how to install and use etckeeper, since there are several blogs out there discussing this topic:

http://bryan-murdock.blogspot.com/2007/07/put-etc-under-revision-control-with-git.html
http://www.jukie.net/~bart/blog/20070312134706
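
A minimal etckeeper session looks like this (assuming etckeeper is installed and set up to use git):

etckeeper init
etckeeper commit "initial checkin of /etc"

After that, etckeeper also hooks into the package manager (on Debian via apt, for example) and commits changes to /etc automatically on installs and upgrades.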


You should also keep an eye on the history command. Some tweaking may save you time and work in the future. So go on and edit your local or the global bashrc: first expand your history from the default size of 1000 to, say, 1000000 by changing or defining these variables in your bashrc

HISTSIZE=1000000
HISTFILESIZE=1000000

If you want to know when a specific command was executed, you may want to set the HISTTIMEFORMAT variable, which is normally not set. For example, I set this to:

HISTTIMEFORMAT="%F-%M-%S --> "

So my history output looks like this:
996 2011-09-30-05-08 --> pull -u
997 2011-09-30-05-08 --> git pull -u
998 2011-09-30-05-08 --> cd


Preventing duplicated lines can be achieved with
HISTCONTROL=ignoredups

You may also want to set the HISTIGNORE variable:
HISTIGNORE="su *:sudo *"

to avoid that somebody can determine from the history which commands were executed as root.
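
Putting it all together, the history-related part of a bashrc might look like this (the values are just the ones used above):

# long, timestamped history without duplicates, hiding root commands
HISTSIZE=1000000
HISTFILESIZE=1000000
HISTTIMEFORMAT="%F-%M-%S --> "
HISTCONTROL=ignoredups
HISTIGNORE="su *:sudo *"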

That's all for today.

Wednesday, 27 July 2011

My Backup Strategy Part I

Today I will tell you something about one of the most important tasks in keeping a system up and running: I will show you my backup solution.

In fact my backup solution does not involve technology you would find in a server infrastructure, such as RAID or LVM. I was able to get all my system and private data backed up without this sophisticated technology. The reason why I don't back up with RAID and LVM is that RAID does not prevent you from doing stupid things: for example, when you delete data on a RAID 1 array, you delete this data from all the mirrored disks as well.

I use only scripts, cron jobs and a couple of extra hard disks. The first thing I did was to build in an extra hard disk of the same size, which is mounted at /mnt/Backup.

This disk may be bigger than the disk your operating system sits on, but it should not have less space.

For the system backup you should realize that you do not need to save the files that were installed by rpm or debian packages. Instead, to save time and space, only save those files which differ from the packaged versions or were not installed by any package at all. On SUSE you can use the YaST backup tool, which does exactly this. On Debian you can create an ISO image of the installed packages with the aptoncd program, then find the files not belonging to any package and save them into a tar.gz file, as the YaST backup tool does.

With YaST you can easily set up an automated cron job running once a week, on Friday at 22:00 for example. It will save both the packages installed on your system and the list of packages you have installed. If you prefer a more durable backup you could use the script package_state with the -s option to save the current state of packages in a tar.gz file.

On Debian you can achieve this with something like:

dpkg -l | grep '^ii' | awk '{print $2}' > installed.lst

apt-get install --download-only --reinstall $(cat installed.lst)
(the downloaded .deb files end up in /var/cache/apt/archives)

tar cvzf backuped_packages.tar.gz /var/cache/apt/archives

mv backuped_packages.tar.gz /mnt/Backup/system/

But I am not a Debian expert, so there might be other ways to accomplish this task.

I also always make a copy of the whole /etc directory, just in case. You could use etckeeper on Debian systems. On rpm systems you have to do this task with the copy command:

cp -av /etc/* /mnt/Backup/system/etc/
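
If you want this to happen automatically, a crontab entry like the following (matching the weekly Friday 22:00 schedule mentioned above) will do:

# copy /etc to the backup disk every Friday at 22:00
0 22 * * 5 cp -av /etc/. /mnt/Backup/system/etc/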

With this solution, when the backup disk fails you simply plug in a new disk and make a new backup. When the main disk your system runs on fails, you plug in a new system disk, recover your system from the packages you previously saved, and copy back the /etc directory and the files that were not installed by any package.

Now for the users' home directories I recommend using a separate /home partition, so whenever you install a new system you don't need to delete these files. You can even back up a user's files and settings with my rsync wrapper script backup_userhome. You clearly need rsync to be installed. After that you can call the script with:

backup_userhome /mnt/Backup User
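
The wrapper itself is not listed here, but a minimal sketch of what such an rsync wrapper could look like (the real script may differ) is:

#!/bin/bash
# backup_userhome <backup-dir> <user>  -- sketch only
dest="$1"
user="$2"
rsync -aAX --delete "/home/$user/" "$dest/home_$user/"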

And restoration is just the other way around with restore_userhome

restore_userhome /mnt/Backup User

Be sure to delete all data from the user's home that was deployed during the installation, then run the restore script, and only then log in for the first time. Et voilà, everything is back where it should be. You can even revisit the files you were last working on by using the Places menu in GNOME.

That's it.
Part II will be written when GNU Hurd is ready, or even earlier ;)

Friday, 15 July 2011

LinuxWissen

Update:
LinuxWissen has moved to GitHub: https://github.com/tuxlover/LinuxWissen


I have now started to cast my Linux sheet into HTML. Thanks to zim this was relatively easy to manage. It is about 40% done and already contains some improvements over the text version, which will remain available but will no longer be updated.

I will try to update it fairly regularly. Given the complexity, there can of course never be a final version.

http://propstmatthias.bplaced.net/LinuxWissen/

Saturday, 11 June 2011

Removing the U3 CD image from a USB stick

Hi guys and girls,

I finally found out how to get rid of this annoying U3 partition on those SanDisk USB sticks. Just get the u3-tool package and issue

u3-tool -p 0 /dev/sdX

where X is your device. Be certain you have a backup, since this could destroy your data.

Thursday, 14 April 2011

S.A.K.C: introducing the SUSE Automatic Kernel Compiler

Hi girls and guys,

In one of my last posts I briefly wrote about how you can install the latest kernel from the Kernel:HEAD repository for your current openSUSE release using just the kernel's src.rpm file.

Especially the last steps, where you actually install the kernel and have to create a proper initrd as well as proper entries in /boot/grub/menu.lst, are quite tricky, and messing them up may result in a system that no longer boots.

Spending some time in the openSUSE forums I found this script. I also packaged an rpm in my home: repository. So all you have to do now is apply your patches, re-tar the sources as .tar.bz2 and run the script.

If you get the scripts from my repository you will find sakc and klist in the /usr/bin directory. I've modified them a little. Go to the ~/Kernel directory (if you don't have one, create it) and type:

sakc ~/Download/linux-2.6.39.rc3.tar.bz2



The script will unpack the source file for you and configure your kernel using the /proc/config.gz file. If you want to configure the kernel before sakc compiles it, you can do so: the script will ask whether you want to configure it and uses make menuconfig, so you can take the time to configure the kernel properly according to your needs.

Once you leave the kernel configuration menu, sakc begins to compile the kernel. The good thing is that the script determines how many cores you currently have and uses make's -j option accordingly to optimize the compiling process.
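
I have not reproduced that part of the script here, but the usual idiom for picking the job count looks something like this:

make -j"$(nproc)"    # nproc reports the number of available cores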



After compiling is done you will be prompted for the root password, so don't leave the computer unwatched, and the script will not only install the kernel and its modules but will also create a valid initrd and an entry in /boot/grub/menu.lst.

Thursday, 24 March 2011

Putting users configuration under Git version control

Hi girls and guys,

today I want to show you something that a friend of mine,
Reiner Herrmann, pointed out to me.

There are two main reasons why you would want to put your configuration under version control. First, you can document what you change and when you change files like ~/.bashrc or ~/.vimrc; at least those are the files that change most often on my machines. The second reason is simply to provide an easy way to restore your configuration when things get messed up.

First make sure you have git installed. It should be in the main repository of your distribution. On Debian you can simply install it using apt-get install git.

I am now going to show you how to put your ~/.bashrc under version control using git.

If you are not familiar with git, simply go to its website. You will find all the documentation you will ever need there.

Let's assume for now that my home directory is /home/matthias. Create a directory ~/.local_gitrc

mkdir ~/.local_gitrc

and create the git repository by changing into the directory you've just created. There you can initialize your git repository by typing:

git init

That's all, your git repository has been initialized. Next you copy the original ~/.bashrc into this directory

cp ~/.bashrc bashrc

and add it to the repository's index

git add bashrc

Make your first commit.

git commit -m " bashrc: initial commit"

Now, as the final step, link the file in your git repository to its original location:

ln -sf /home/matthias/.local_gitrc/bashrc /home/matthias/.bashrc
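
From now on, editing ~/.bashrc changes the file inside the repository through the symlink, so documenting a change is just:

cd ~/.local_gitrc
git add bashrc
git commit -m "bashrc: describe what you changed and why"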

There have been a lot of suggestions around the internet that you should be able to do this with the /etc directory too. The result is a script called etckeeper.



That's it.

Wednesday, 9 March 2011

Ladies and gentlemen, I present

The Linux sheet that grew out of eight months of LPIC-1/2.
The sheet is of course far from finished. Improved versions will follow soon, since this first version that is officially allowed on the web still contains quite a few mistakes and needs some additions. I hope it will be of use to somebody.

Translations into other languages are welcome. Please translate into other languages.

Wednesday, 2 March 2011

Setting up XWiki on Opensuse with mysql and tomcat

Hi girls and guys,

After spending hours with my setup of XWiki on openSUSE 11.3, I finally managed to fulfill this task.
Since it is pretty easy to install but has a lot of rough edges to deal with, I will give you detailed instructions on how to get this wiki up and running.

I only describe how to do this on openSUSE. If you are looking for Debian, just go ahead and read here

(1) First install these packages using zypper:

zypper ref && zypper in mysql-community-server mysql-community-server-client tomcat6 tomcat6-el-1_0-api tomcat6-jsp-2_1-api tomcat6-lib tomcat6-servlet-2_5-api jakarta-commons-collections-tomcat5 mysql-connector-java java-1_6_0-sun

(2) Now copy your XWiki-enterprise-web-A.x.n.war to /srv/tomcat6/webapps/xwiki.war and start the tomcat6 server by issuing the following:

service tomcat6 start


(3) After a couple of seconds tomcat will have unpacked your war archive, and you should end up with a directory xwiki under /srv/tomcat6/webapps/.

Stop the tomcat server by doing a

service tomcat6 stop

(4) We need to make some changes to the init scripts. First edit the /etc/init.d/tomcat6 script and put

TOMCAT6_SECURITY=no


under the line with
PATH="/bin:/sbin"

and save the file. Next is /etc/init.d/mysql. Find the line in the case statement that says

echo -n "Starting service MySQL "


and add the following option line
--max_allowed_packet=32M \

The \ at the end of the line is important here!

(5) Next create the database for your xwiki and grant the permissions by executing the following:

service mysql start && mysql_secure_installation
(this will finalize the installation of mysql for opensuse)
mysql -u root -p -e "create database xwiki"
(this will create the database xwiki)
mysql -u root -p -e "grant all privileges on xwiki.* to xwiki@127.0.0.1 identified by 'xwiki'"
(this will grant the permissions to Xwiki)

(6) Copy the MySQL connector library from /usr/share/java/mysql-connector-java-5.1.6.jar to /srv/tomcat6/webapps/xwiki/WEB-INF/lib/. A symbolic link may also work, but I didn't try that out.

(7) You must now edit the /srv/tomcat6/webapps/xwiki/WEB-INF/hibernate.cfg.xml file and comment out the database configurations you don't need, especially the hsqldb one, which is the only one not commented out by default. Uncomment the MySQL section instead.
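
The uncommented MySQL section should end up looking roughly like this (host, user, password and database match the grant statement from step (5); details may vary between XWiki versions):

<property name="connection.url">jdbc:mysql://127.0.0.1/xwiki</property>
<property name="connection.username">xwiki</property>
<property name="connection.password">xwiki</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>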

(8) Now you can restart the servers by issuing the following commands:
service mysql restart && service tomcat6 start

(9) You should enable the service daemons permanently, to avoid starting them manually each time:
chkconfig -a mysql && chkconfig -a tomcat6

The wiki should now be live at http://localhost:8080/xwiki. All you have to do now is apply the xwiki-enterprise-wiki.A.x.n.xar file, but this should be no problem if you followed these steps.

You can get the latest XWiki and further Instructions here:
http://www.xwiki.org/xwiki/bin/view/Main/Download

Tuesday, 8 February 2011

Testing Gnome3


Hi guys and girls.

this time I tried out the upcoming GNOME 3 desktop for you. GNOME 3 will be released, if nothing stands in the way, in April 2011. At first I was really sceptical about GNOME 3 being released relatively soon after it was officially announced; you know, will it be the same failure that KDE 4 was, those kinds of thoughts. But then I got really impressed. If you want to make the experience yourself, you are welcome to do so: just download the iso here and get started.



GNOME 3 is not just a new release with new blinking and shining. They really did think about how to create a new desktop experience, how to design new interaction concepts, and how to make things better and more user-friendly.

I would like to describe it this way: think of a new way to organize and work with the desktop. The desktop no longer stands in the way when you want to work with applications, and applications no longer stand in the way when you want to organize your desktop.

When you click in the upper left corner of the screen you can switch from running applications to organizing your desktop. In the activities overview you can now see which applications you are running. You can then move the windows across virtual desktops, which in GNOME 3 you can dynamically add or remove by moving the mouse to the right side and clicking the plus or minus buttons that appear.



You can also search for applications and start them right away, or you can click on the Applications tab right next to the Windows tab, which leads you to the GNOME menu where you can, as usual, go through and select the application you want to start. If you want to switch back to the applications, just click on one of the application windows and you can continue working with that program in a nearly full-sized window.

Your personal preferences can be set by clicking on your user name at the top of the screen. There you will also find your open chats and incoming messages. Surely this was adopted from Ubuntu, but it has also been greatly improved.

One last thing to say: setting up a personal desktop background is not working yet. You must install xdg-user-dirs using

zypper in xdg-user-dirs

and place your picture files in the ~/Pictures directory after logging the user out and in again. And sure, there are still a lot of bugs, and the preview should not be used in a productive environment, but I hope most of them will be gone when the final version is out.