Debugging Monit Start and Stop Actions

I was recently trying to do some tricky start and stop commands with Monit http://mmonit.com/monit/.  Unfortunately, while Monit itself can log to syslog, it doesn't log any output from the start and stop commands it runs.

The stripped-down environment that Monit spawns can be problematic, and getting it to cough up what's wrong can be frustrating.

Based on an answer to a similar question on Stack Overflow http://stackoverflow.com/questions/3356476/debugging-monit by billitch, I think I came up with a good way to keep tabs on what's happening inside those Monit start and stop commands: a wrapper script that pipes both standard output and standard error to syslog.

I created two shell scripts in /etc/monit: one for full debugging, and one with minimal extra output for ongoing use, so that if a problem crops up you can check your syslog and see what went wrong.

My original scripts also preserved the exit code of the command under test, but apparently Monit doesn't give a whit what the command exits with, so I've presented the simpler scripts here.

/etc/monit/modebug

#!/bin/sh
# full-debug wrapper: log the date, the environment, the wrapped command, and its exit code to syslog
{
  echo "MONIT-WRAPPER date"
  date
  echo "MONIT-WRAPPER env"
  env
  echo "MONIT-WRAPPER $@"
  "$@"
  R=$?
  echo "MONIT-WRAPPER exit code $R"
} 2>&1 | logger

/etc/monit/morun

#!/bin/sh
# minimal wrapper: log the command and its exit code to syslog
{
  echo "MONIT-WRAPPER $@"
  "$@"
  R=$?
  echo "MONIT-WRAPPER exit code $R"
} 2>&1 | logger

This is an example Monit config, showing both wrapper scripts in use.

/etc/monit/conf.d/dk-filter.monit

check process dk-filter with pidfile /var/run/dk-filter/dk-filter.pid
      group mail
      start program = "/etc/monit/modebug /etc/init.d/dk-filter start"
      stop  program = "/etc/monit/morun /etc/init.d/dk-filter stop"
      if 5 restarts within 5 cycles then timeout
      if failed unixsocket /var/run/dk-filter/dk-filter.sock then restart
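
Remember that both wrapper scripts need to be executable. Once they are in place, everything they capture lands in syslog via logger, so you can follow along while Monit runs its checks. A minimal sketch, assuming a Debian/Ubuntu-style syslog at /var/log/syslog (adjust the path for your distribution):

chmod +x /etc/monit/modebug /etc/monit/morun

# watch the wrapper output as Monit starts and stops the process
tail -f /var/log/syslog | grep MONIT-WRAPPER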

Bundler version 1.0.0.rc.5 Full Service Capistrano Task

I've recently been working on updating a set of Ruby applications and converting all of the gem dependencies to use Bundler (gembundler.com).

I've extended my optimized Capistrano Bundler tasks to handle the prerequisite RubyGems version and to provide a number of customization parameters.

Bundler 1.0 now includes a basic "bundle:install" Capistrano script http://github.com/carlhuda/bundler/blob/master/lib/bundler/capistrano.rb, but since mine covers a bit more ground than the default one I'm continuing to build my scripts around the following Bundler tasks.




Capistrano::Configuration.instance(:must_exist).load do


  desc "Add deploy hooks to invoke bundler:install"
  task :acts_as_bundled do
    after "deploy:rollback:revision", "bundler:install"
    after "deploy:update_code", "bundler:install"
    after "deploy:setup", "bundler:setup"
  end
  
  namespace :bundler do
    
    set :bundler_ver, '1.0.0.rc.5'
    set :bundler_opts, %w(--deployment --no-color --quiet)
    set(:bundler_exec) { ruby_enterprise_path + "/bin/bundle" }
    set(:bundler_dir) { "#{shared_path}/bundle" }
    set :bundler_rubygems_ver, '1.3.7'
    set(:bundler_user) { apache_run_user }
    set :bundler_file, "Gemfile"
    
    desc "Update Rubygems to be compatible with bundler"
    task :update_rubygems, :except => { :no_release => true } do
      gem_ver = capture("gem --version").chomp
      if gem_ver < bundler_rubygems_ver
        logger.important "RubyGems needs to be updated, has gem --version #{gem_ver}"
        gem2.update_system 
      end
    end
    
    desc "Setup system to use bundler"
    task :setup, :except => { :no_release => true } do
      bundler.update_rubygems
      gem2.install_only "bundler", bundler_ver
    end
  
    desc "bundle the release"
    task :install, :except => { :no_release => true } do
      bundler.setup
      
      #Don't bother if there's no gemfile.
      #optionally do it as a specific user to avoid permissions problems
      #do as much as possible in a single 'run' for speed.
      
      args = bundler_opts
      args << "--path #{bundler_dir}" unless bundler_dir.to_s.empty? || bundler_opts.include?("--system")
      args << "--gemfile=#{bundler_file}" unless bundler_file == "Gemfile"
      
      cmd = "cd #{latest_release}; if [ -f #{bundler_file} ]; then #{bundler_exec} install #{args.join(' ')}; fi"
      cmd = "sudo -u #{bundler_user} sh -c '#{cmd}'" if bundler_user and not bundler_user.empty?
      run cmd

      on_rollback do
        if previous_release
          cmd = "cd #{previous_release}; if [ -f #{bundler_file} ]; then #{bundler_exec} install #{args.join(' ')}; fi"
          cmd = "sudo -u #{bundler_user} sh -c '#{cmd}'" if bundler_user and not bundler_user.empty?
          run cmd
        else
          logger.important "no previous release to rollback to, rollback of bundler:install skipped"
        end
      end
        
    end

  end
  
end
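
With these tasks loaded, a deploy script opts in by calling acts_as_bundled (an example appears further below), but the tasks can also be invoked on their own for maintenance. A couple of hedged examples; they assume the tasks are already required into your Capfile and that your stage and role settings are in place:

# install/upgrade RubyGems and the bundler gem on the servers
cap bundler:setup

# re-bundle the latest release by hand
cap bundler:install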


This code uses some other dependencies that I use throughout my Capistrano scripts, namely the customized Gem2 plugin from vmbuilder_plugins. The following code is necessary to monkey-patch the Gem2 plugin, which is bundled with Mike Bailey's deprec project http://github.com/mbailey/deprec/blob/master/lib/vmbuilder_plugins/gem.rb




module Gem


  GEM_UNINSTALL= "gem uninstall --ignore-dependencies --executables"


  # installs +package+ at exactly +version+ (if given), first removing any
  # other installed versions; retries up to 3 times on transient errors.
  def install_only(package, version=nil)
    tries = 3
    begin
      cmd = "if ! gem list | grep --silent -e '#{package}.*#{version}'; then gem uninstall --ignore-dependencies --executables --all #{package}; #{GEM_INSTALL} #{if version then '-v '+version.to_s end} #{package}; fi"
      send(run_method,cmd)
    rescue Capistrano::Error
      tries -= 1
      retry if tries > 0
      raise
    end
  end


  # uninstalls the gems detailed in +package+, selecting version +version+ if
  # specified, otherwise all.
  #
  #  
  def uninstall(package, version=nil)
    cmd = "#{GEM_UNINSTALL} #{if version then '-v '+version.to_s  else '--all' end} #{package}"
    wrapped_cmd = "if gem list | grep --silent -e '#{package}.*#{version}'; then #{cmd}; fi"
    send(run_method,wrapped_cmd)
  end


end


Unlike many examples of Capistrano deploy scripts, which are kept inside the app root of the application they deploy, my deploy scripts are kept in their own repository, and they were built to contain all of the "institutional knowledge" of how to interact with all of our server farms and applications.  There is a lot of shared code that would have to be duplicated if I were to try to break the deploy scripts apart and store them with each application.  There would also be a lot of tasks that don't make sense to live in only one application, and some wouldn't have a home in any application.  I've found a couple of patterns that make it easier to hook and unhook code into the main deploy processes, which I may touch on in other posts.

One of the techniques used in the bundler tasks is to separate out the callback hooks into their own top-level task.  This means I don't have to remember all of the bundler tasks' individual hooks; I can easily add them to any application that needs them, and omit them from applications that don't.  A second technique I use is to control the task chains.  I learned early on that you want to be able to call individual tasks for maintenance without inadvertently triggering a long series of chained tasks.

This is an example of what you might see in one of my application deploy scripts:




  on :start, :only => ["deploy","deploy:setup","deploy:migrations","deploy:cold"] do
    acts_as_bundled
    before "deploy:update_code", "deploy:clear_release_path","deploy:setup_dirs"
    after "deploy:update_code", "deploy:authentication", "git:track", "scalr:on_boot_finish", "git:config", "scalr:on_hostup"
    after "deploy:symlink", "deploy:cleanup"
  end


Notice that acts_as_bundled is a nice declarative way to invoke the bundler callback assignments, and that they are only invoked if one of the top-level tasks matches the :only clause.

So, for example, I can run a command like




cap deploy:symlink


Without causing "deploy:cleanup" to be executed.  But when I do:




cap deploy


It will.

Optimized Capistrano Bundler 0.9 installation task

The recent upgrade of Bundler to 0.9.3 requires removing any previous bundler gems, and if you use Capistrano or another deployment system this will bite you if you haven't already upgraded.

This task was built for Ubuntu, but should work fine for any bash environment where Ruby and Rubygems are setup properly.


namespace :bundler do
  set :bundler_ver, '0.9.3'
  desc "install bundler"
  task :install, :roles => :app do 
    run "if ! gem list | grep --silent -e 'bundler.*#{bundler_ver}'; then gem uninstall --ignore-dependencies --executables --all bundler; gem install -y --no-rdoc --no-ri -v #{bundler_ver} bundler; fi"
  end
end
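
If you want to check a server by hand, the same guard the task runs can be pasted straight into a shell on the box; this is just the one-liner from the task reformatted (it assumes gem is on the PATH for the deploy user):

if ! gem list | grep --silent -e 'bundler.*0.9.3'; then
  gem uninstall --ignore-dependencies --executables --all bundler
  gem install -y --no-rdoc --no-ri -v 0.9.3 bundler
fi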



Capistrano is a systems deployment tool. www.capify.org

Bundler is the new Ruby dependency management gem that is integrated into Rails 3. http://github.com/carlhuda/bundler

WebROaR isn't ready for production, yet.

WebROaR http://webroar.in/, a Ruby web application server designed to support Rails and Rack based applications, recently came to my attention. My currently preferred Rails stack is Apache2 http://httpd.apache.org/, Ruby Enterprise Edition 1.8.7 http://www.rubyenterpriseedition.com/, and Passenger http://www.modrails.com/, using NewRelic RPM http://www.newrelic.com/features.html for notifications and metrics.  I wanted to see how WebROaR stacked up as a replacement.

I began my investigation the way I do any application: I started by writing an installation script.  My tool of choice for deployments right now is Capistrano http://www.capify.org, and I've recently been interested in Deprec http://deprec.failmode.com/, a framework of recipes built on top of Capistrano specifically designed to install on Ubuntu, which, by chance, is also my preferred Linux distribution. So I decided to extend Deprec with a WebROaR installation recipe.

WebROaR began showing its relative youth immediately.  I expect to be able to install a production application non-interactively. That is to say, I should be able to script the entire affair by passing values to the installer, configure, and make scripts via the command line.  The current version of WebROaR's installer will prompt you for up to four different values and provides no mechanism for a non-interactive install.

With Capistrano the interactive install can be worked around by examining the output and responding as a user would with stored answers. Other deployment systems based on shell scripts will find installing WebROaR challenging.

The next issue to deal with is provisioning the application.  There was no mechanism, save rewriting the entire configuration file, to create a new application from Capistrano.  For this reason I left any configuration of the server or applications out of the deprec recipe.

The problem with re-creating the entire configuration via Capistrano is a simple one: when you install an application with Capistrano it doesn't normally have any concept of the other applications that may be hosted by your server.  So a rewrite of the entire config would end up wiping out all other applications unless pains were taken to make it tolerant.

Another sign that WebROaR is fairly young is that when I deployed a simple test application, the deploy scripts hadn't properly used the bundle command to unpack all of the application's dependent gems. It left the application in a really sorry state.  WebROaR's user interface didn't expose the error condition, nor was it able to serve the configured application.  It wasn't until I connected to the box via ssh and tried to run script/server in the application's deploy folder that I got any clue as to what the problem might be.  WebROaR should have done what it could to raise the problem to my attention within its exceptions section. I dealt with several errors where the WebROaR UI wasn't any help, which gives me the impression there are many areas where WebROaR is rough around the edges when it comes to reliability.

It's great that WebROaR's concept is built around interactive management; Apache, Passenger, Mongrel, and Thin don't really provide any instrumentation without something like NewRelic's RPM.

The exception reporting was definitely useful, and presented nicely, if not exactly timely.  After triggering a few errors it still took several refreshes of WebROaR's UI before the information was ready to be examined.  Similarly, there was a delay between when the first requests were generated and when the graphs in the analytics could be reviewed.

Things like email and SMS notifications are not in the product yet. I would have expected the ability to automatically send notifications, and to have some control over the process, when metrics go beyond certain thresholds or when certain types of exceptions occur.

The admin interface is protected by only a single username and password; multiple users aren't possible. Obviously role-based access controls aren't in the product yet either.

The documentation is rather good for what features there are. But there aren't that many features.

I didn't do any kind of exhaustive performance benchmarks or stress tests because my curiosity had already been sated by getting my test application up. WebROaR doesn't come close enough to the configurability, stability, and breadth of options of my current Rails stack for me to even remotely consider it as a replacement, even if all of its claims about performance are true.

If it's going to become a production-ready application, it's going to have to make it efficient for sysadmins to install and manage it from the command line, and report error conditions better.  I can absolutely see its allure to hosting companies as a great way to host Rails apps if they can delegate control to end users and use an API to control the configuration. The project is very young, announced November 25th, 2009 according to their blog, and it's incredibly promising; I'll definitely check it out again when they get near a 1.0 release.


Deprec WebROaR recipe: http://github.com/donnoman/deprec/blob/2cdac7ed5c322d41512673a2e199707b3e47de...

Related changes to deprec: http://github.com/donnoman/deprec/commit/2cdac7ed5c322d41512673a2e199707b3e47...

Any fixes will be committed here: http://github.com/donnoman/deprec/tree/webroar

Sample App Deploy Script using webroar: http://github.com/donnoman/flitten_deploy/commit/76e0ae350d75127fb43e7cc181f0...

Windows XP Caching Nameserver forwarding to Google's Public DNS with support for private wildcard DNS zones

Why on earth would you want to do this?

For a very specific use case.

  1. You do development of a web application locally that needs a wildcard domain name, i.e. where you want 192.168.1.1 to answer any http request for *.example.com (example.com, www.example.com, blah.example.com, ridiculously.long.example.com) without specifically configuring each name in a hosts file.
  2. You have previously set up your workstation to use Google's Public DNS and don't want to lose the benefits by setting up your own nameserver. (This also applies if you forward to ANY upstream DNS servers; just swap the Google IPs out with the ones you want.)
  3. You are using Windows XP (this should also work on Vista). You can apply the same configuration files here to any version of BIND. (OSX users should look at a utility called DNSEnabler that provides a dead simple graphical user interface to manipulate BIND on OSX.) But the steps here are specific to Windows.

Why Google's Public DNS?

It's faster than your ISP's default DNS, which will make browsing faster.  You can also use these instructions with other DNS networks like OpenDNS, or even forward back to your ISP if you so wish. http://code.google.com/speed/public-dns/

Why Windows XP?

Because it's what I have at home. I use OSX and Linux at work, where I use DNS Enabler http://cutedgesystems.com/software/DNSEnabler/ on OSX, and I've previously posted how to configure BIND on Linux to use Google's Public DNS in an office environment, which pretty much covers all my bases.

Why BIND?

It is the de facto DNS implementation; it's well worn and battle tested, and they make a Windows distribution of the software that's freely available.  While I like Microsoft's DNS server, it's only available on their server products, so regrettably it's not an option for Windows XP.

There are other Windows-based DNS servers, but most of them are commercial and are not as clean, cheap, or as easy as DNSEnabler, so it wasn't worth the time to research any of them. I also have experience configuring BIND, so I figured I could just share my configuration, and hopefully the two other people on the internet with the same needs as I have can benefit from my experience.

For more information about BIND: https://www.isc.org/software/bind . A great reference about DNS in general as well as BIND is the venerable book "DNS for Rocket Scientists" http://www.zytrax.com/books/dns/

What are the steps?

  1. Download latest BIND zip for windows: https://www.isc.org/software/bind
  2. Unpack and run: BINDInstall.exe
  3. View: C:\windows\system32\dns\bin\readme1st.txt
  4. Start->Run: C:\windows\system32\dns\bin\rndc-confgen -a
  5. Use Explorer, navigate to c:\windows\system32\dns, right click "etc", properties, security, add, "named", click "full control", OK, OK
  6. Download ftp://ftp.internic.net./domain/named.root and save it to C:\windows\system32\dns\etc\named.root to seed BIND's root hints.
  7. Use Windows Services to start/stop the Named Service
  8. Start -> Right-Click "My Computer", Manage, click "Services", look for "ISC BIND", Right-Click to start, stop, or restart.
  9. Start -> Right-Click "My Network Places", properties, Right-Click your active "Local area connection", properties, click "Internet Protocol (TCP/IP)", properties, Use the following DNS server: 127.0.0.1
  10. For any other system on your network that you want to use this nameserver, you need to use your host's real IP address on the local network (it may change if you're on DHCP; remember to check if you have "internet" problems) and repeat only step 9 on that system, assuming it's Windows.  If it's a Unix platform you would edit /etc/resolv.conf (see the sketch after this list).  If you want to get fancy you can edit dhclient.conf to prevent DHCP from overwriting your custom nameserver selection.
  11. If you have customized hosts in your hosts file that will be covered by the wildcard, you must remove them: C:\windows\system32\drivers\etc\hosts
  12. If you intend on using the DNS server from workstations other than the localhost and you are running any kind of firewall, you will need to open up port 53, both UDP and TCP.  For Windows' included firewall: Start -> Right-Click "My Network Places", properties, Right-Click your active "Local area connection", select the "Advanced" tab, settings, select the "Exceptions" tab, Add Port (name: DNS, port: 53, type: TCP), then repeat the last step for UDP.
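
For the Unix clients mentioned in step 10, /etc/resolv.conf only needs a nameserver line pointing at the machine running BIND. A minimal sketch, using the 192.168.1.70 address the example zone below assigns to ns1 (substitute your own server's IP):

# /etc/resolv.conf on a Unix client
nameserver 192.168.1.70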

C:\windows\system32\dns\etc\named.conf is as follows:

options {
  // version statement - inhibited for security
  // (avoids hacking any known weaknesses)
  version "get lost";
  // optional - disables all transfers
  // slaves allowed in zone clauses
  allow-transfer {"none";};
  forwarders {8.8.8.8; 8.8.4.4;}; //GOOGLE Public DNS
  directory "C:\WINDOWS\system32\dns\etc";
};

view "trusted" {
  match-clients { 192.168.0.0/16; 127.0.0.1; }; // any private class c and localhost
  recursion yes;
  // required zone for recursive queries
  // retrieve from: ftp://ftp.internic.net./domain/named.root
  zone "." {
    type hint;
    file "named.root";
  };
  // basic localhost support
  zone "localhost" in{
    type master;
    file "master.localhost";
  };
  // basic localhost support
  zone "0.0.127.in-addr.arpa" in{
    type master;
    file "localhost.rev";
  };
  // this is the wildcard zone
  zone "dev.example.com" {
    // Don't forward queries for this zone.
    forwarders {};
    type master;
    // The final extension of .txt is simply so that Windows doesn't
    // think the file is executable (.com), and will open the file with your
    // systems designated text editor without any fuss.
    file "dev.example.com.txt";
  };
};

view "badguys" {
  match-clients {"any"; }; // all others hosts
  // recursion not supported
  recursion no;
};

C:\windows\system32\dns\etc\dev.example.com.txt is as follows:

$TTL 2d    ; 172800 secs default TTL for zone
$ORIGIN dev.example.com.
@             IN      SOA   ns1.dev.example.com. hostmaster.dev.example.com. (
                        2009120500 ; se = serial number
                        12h        ; ref = refresh
                        15m        ; ret = update retry
                        3w         ; ex = expiry
                        3h         ; min = minimum
                        )
              IN      NS      ns1.dev.example.com.

                              ; Retrieve the IP for the target of the wildcard
                              ; Linux: ifconfig
                              ; Windows: ipconfig
@             IN      A       192.168.1.1

www           IN      A       192.168.1.1

                            ; Retrieve the IP for the DNS server
ns1           IN      A       192.168.1.70

*             IN      CNAME   www

C:\windows\system32\dns\etc\localhost.rev is as follows:

 

$TTL    86400 ;
; could use $ORIGIN 0.0.127.IN-ADDR.ARPA.
@       IN      SOA     localhost. root.localhost.  (
                        1997022700 ; Serial
                        3h      ; Refresh
                        15      ; Retry
                        1w      ; Expire
                        3h )    ; Minimum
        IN      NS      localhost.
1       IN      PTR     localhost.

C:\windows\system32\dns\etc\master.localhost is as follows:

 

$TTL    86400 ; 24 hours could have been written as 24h
$ORIGIN localhost.
; line below = localhost 1D IN SOA localhost root.localhost
@  1D  IN     SOA @    root (
                  2002022401 ; serial
                  3H ; refresh
                  15 ; retry
                  1w ; expire
                  3h ; minimum
                 )
@  1D  IN  NS @
   1D  IN  A  127.0.0.1
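
Before restarting the service and testing, it's worth knowing that the BIND distribution for Windows ships named-checkconf and named-checkzone alongside named, and they will catch most typos in these files. A hedged example, assuming the default install paths used above:

C:\windows\system32\dns\bin\named-checkconf.exe C:\windows\system32\dns\etc\named.conf
C:\windows\system32\dns\bin\named-checkzone.exe dev.example.com C:\windows\system32\dns\etc\dev.example.com.txt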

To test your new caching DNS server for resolving the local wildcard domain: Start->Run: "cmd", then type "nslookup whatever.dev.example.com" at the command prompt.

(If it doesn't work and you can't browse the web anymore, look at step 9 and set DNS back to "obtain DNS server address automatically", or to 8.8.8.8 and 8.8.4.4 to go direct to Google's DNS. You should also look at your Event Viewer to discover any errors; see step 8, except select "Event Viewer" instead of "Services", then look at the "Application" log.)

The command should return:


Server:  localhost
Address:  127.0.0.1

Name:    www.dev.example.com
Address:  192.168.1.1
Aliases:  whatever.dev.example.com

You can use nslookup to check a few more sites, like www.disney.com or www.yahoo.com, to make sure that the forwarding is occurring.

You're done, enjoy.

tail: can tail multiple files simultaneously, who knew? and other tail tricks.

After using tail for a long time, I've only recently had a need to become familiar with tail's ability to watch multiple files.

You can easily watch a single file; that's the tail we all know and love.

tail -f /var/log/syslog

But I've got another rsyslog directory that concentrates logs from a bunch of different servers with specific naming conventions that I can match by filespec.

For each cluster there are multiple app, database, load balancer, and memcache servers. Trying to debug a problem, I needed to tail all of the app servers at the same time.

It's dead simple, particularly if you are in the directory where all the files you want to tail reside.

tail -f *production-app*

This matches any filenames that contain "production-app".

If I need to watch the MySQL servers of my testing cluster:

tail -f *testing-mysql*

Incidentally, you can also tail multiple files without using a filespec:

tail -f /var/log/apache2/access.log /var/log/apache2/error.log

In either case tail interleaves the output with markers so that you know which log file you are looking at. Beautiful.


Other tail tricks:

Combine with grep to watch for your needle before it gets buried in the haystack:

tail -f /var/log/syslog | grep "my needle"

Combine with grep to exclude a bunch of annoying messages that you don't need:

tail -f /var/log/syslog | grep -v "annoying message I don't want to see"

Here's one that I commonly use to cut the cruft out of watching logs on my EC2 instances; it eliminates any lines containing "Connection" or "kernel".

tail -f /var/log/syslog | grep -v 'Connection\|kernel'
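
These tricks combine nicely too. For example, to follow the whole app-server group while filtering out the noise (the rsyslog directory here is hypothetical; use wherever your logs are concentrated):

cd /var/log/rsyslog
tail -f *production-app* | grep -v 'Connection\|kernel'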

Using Google's recently announced Public DNS

http://code.google.com/speed/public-dns/

No forwarders, not previously cached: 259ms.

; <<>> DiG 9.3.4-P1 <<>> disney.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<-
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0

;; QUESTION SECTION:
;disney.com. IN A

;; ANSWER SECTION:
disney.com. 900 IN A 199.181.132.250

;; AUTHORITY SECTION:
disney.com. 86400 IN NS huey.disney.com.
disney.com. 86400 IN NS huey11.disney.com.

;; Query time: 259 msec
;; SERVER: 192.168.250.220#53(192.168.250.220)
;; WHEN: Thu Dec 3 10:52:19 2009
;; MSG SIZE rcvd: 84


Using Google's Public DNS, not previously cached: 120ms.

; <<>> DiG 9.3.4-P1 <<>> disney.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<-
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 0

;; QUESTION SECTION:
;disney.com. IN A

;; ANSWER SECTION:
disney.com. 900 IN A 199.181.132.250

;; AUTHORITY SECTION:
. 52428 IN NS M.ROOT-SERVERS.NET.
. 52428 IN NS A.ROOT-SERVERS.NET.
. 52428 IN NS B.ROOT-SERVERS.NET.
. 52428 IN NS C.ROOT-SERVERS.NET.
. 52428 IN NS D.ROOT-SERVERS.NET.
. 52428 IN NS E.ROOT-SERVERS.NET.
. 52428 IN NS F.ROOT-SERVERS.NET.
. 52428 IN NS G.ROOT-SERVERS.NET.
. 52428 IN NS H.ROOT-SERVERS.NET.
. 52428 IN NS I.ROOT-SERVERS.NET.
. 52428 IN NS J.ROOT-SERVERS.NET.
. 52428 IN NS K.ROOT-SERVERS.NET.
. 52428 IN NS L.ROOT-SERVERS.NET.

;; Query time: 120 msec
;; SERVER: 192.168.250.220#53(192.168.250.220)
;; WHEN: Thu Dec 3 10:53:45 2009
;; MSG SIZE rcvd: 255

In this trivial and far-from-scientific test, Google's DNS appears considerably faster, and as long as they can maintain this level of performance, using their servers will be greatly beneficial to our office network.

We use ISC dhcpd and BIND on Linux servers and configure them on the boxes using vi. There's no pretty web-based interface on a broadband router here.

If you have a broadband router, these instructions will not do you any good. Instead, your broadband router probably has a barely-usable web interface; you should RTFM.

Our DHCP server hands out the addresses of two of our local servers that run BIND, because we host several domains internally.

Making the change:

Assume root status on your name-server

sudo -i

Create a time-stamped backup copy of your /etc/named.conf

cp /etc/named.conf /etc/named.conf.`date +%s`

Edit the BIND configuration file called named.conf.

vi /etc/named.conf

Add the following inside the options {...} section

forwarders { 8.8.8.8; 8.8.4.4; }; //Google Public DNS

If you host zones, you should exclude them from forwarding:

zone "somedomain.com" IN {
type master;
forwarders { }; //don't forward
file "somedomain.internal.db";
allow-transfer {
192.168.0.215;
};
notify yes;
};

Test the new configuration

/etc/init.d/named configtest

Restart Named

/etc/init.d/named restart
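
Before moving on to the next name-server, a quick sanity check from the box itself confirms that forwarding works; any external name will do:

dig @127.0.0.1 www.disney.com

# run it twice: a much smaller "Query time" on the second run means the answer is now cached locally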

Rinse and repeat for each of the name-servers that your DHCP server hands out to your clients.

Get out of root before you screw something else up

exit

Let your office enjoy.