Thursday, October 20, 2011

Fun with Wordpress, Multisite and Permalinks

One of my customers alerted me to the fact that permalinks (such as the "recent posts" section on the sidebar) weren't working on a sub-blog on their multisite installation. Instead they got a "Page not found" error.

I did a quick bit of poking around and found that none of the permalinks on any of the sub-blogs were working. If I hard-coded a non-permalink URL (http://www.thesite.com/sub-blog1/?p=123, where "123" was an actual post ID) then I could get to the post just fine.

This seemed like a straightforward problem, but "wordpress multisite permalink" didn't give me an immediate hit. It took a little digging before I found this post. And even then, the solution was hinted at way at the bottom of the thread.

Did you activate the plugin for each site?
Have you resaved permalinks on the subsite?
Well, I didn't have a plugin that I thought was the culprit. But saving the permalinks on the site?

I checked one of the sub-sites and permalinks were set correctly. BUT, I've seen stranger things in 23 years, so I hit "Save Changes" on the permalinks page anyway.

Well what do you know? It worked.

Not going to ask why or how. Just tucking this one away for the future and moving on with my life.

Monday, October 17, 2011

That's 35 in Dog Years

I have never, ever understood the concept of a 5-year plan, nor the need for a certain level of management to spend 1-3 months tuning said plan each and every year.


Wednesday, September 28, 2011

Going with the (net)Flow

If you are using SolarWinds' NetFlow Traffic Analyzer module (NTA) then you might have run into some confusion about all the different settings.

Most people keep them at the default but if you are experiencing performance hits, you will want to see where a tweak here or there might be beneficial.

The problem is (and no disrespect meant to the hard working tech writers at Solarwinds), the options don't make a whole lot of sense at first blush.

What appears below are my notes after about 20 (no exaggeration) email exchanges with tech support to nail down what each of the options means. It also includes what SolarWinds is doing behind the scenes with your data.

To see these options, log in to the regular website as an administrator, go to Settings (upper-right corner), then "NTA Settings".

“Compress Data” is talking about rolling up the data - averaging the detailed statistics up into hourly values. The options that apply to this are:

“Keep Uncompressed Data for...”
How long should NPM keep the minute-by-minute data from each data source (the default is one hour). During this time a new table is created for each netflow source every 15 minutes. If set to 60 minutes, you get 4 tables per netflow source. If you had 1,000 netflow sources, that would be 4,000 tables.

This can be bumped up to 240 minutes, but doing so will create proportionally more tables.
  • Once the time limit (again, 60 minutes is the default) is reached, all those detailed values in all those tables are calculated into a 15-minute average. This becomes the database table NetflowSummary1.
  • Every 24 hours (this is NOT tunable), the 15-minute data is compressed (averaged) into hourly data. This becomes the table NetflowSummary2.
  • After 3 days (again, not tunable), the hourly data is compressed into a daily average, which is moved to the table NetflowSummary3.
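To keep the stages straight, here is the rollup pipeline sketched as code. This is only my mental model of what tech support described - the bucket boundaries and straight averaging are my assumptions, not SolarWinds' actual implementation:

```python
from statistics import mean

def roll_up(samples, bucket_seconds):
    """Average (timestamp, value) samples into fixed-width time buckets,
    mimicking one NTA compression pass."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts - ts % bucket_seconds, []).append(value)
    return sorted((start, mean(values)) for start, values in buckets.items())

# One hour of per-minute samples from a single netflow source...
detailed = [(minute * 60, float(minute % 3)) for minute in range(60)]
# ...averaged into 15-minute buckets (NetflowSummary1)...
summary1 = roll_up(detailed, 15 * 60)
# ...then into hourly buckets (NetflowSummary2); the daily pass
# (NetflowSummary3) is the same operation run once more.
summary2 = roll_up(summary1, 60 * 60)
```

Each pass throws away detail and keeps only the average, which is why "compress" here really means "roll up", not zip-style compression.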
“Keep Compressed data for”
The daily averages are held for 30 days (this can be set longer), after which they are deleted.

“Delete expired flow data”
The expired data (i.e. older than 30 days, or whatever you set) is deleted however often you indicate in this setting. "Once a day" is the default.

“Compress database and log files”
is a shrink operation. As in, it tells the MS-SQL server to shrink tables. Nothing more exciting than that.

“Enable aggregation of Top Talker data” 
This uses memory on the primary poller to store a certain amount of NetFlow statistics. The web server (either locally, or via port 17777 if you have an additional web server) pulls the statistics from RAM rather than running a distinct query against the DB server. This improves the overall load times of the NTA webpages (especially top talkers, top conversations, top applications) and has the secondary effect of reducing load on the database server. Of course, any of that is only true if you have a lot of people hitting the NTA pages all the time.

Tuesday, September 27, 2011

Yes yes yes! 100%. Like. Plus. Bump. Retweet. Buzz. Digg. Stumble. Forward.


Could someone please forward this to my Mother-in-law?
Thanks.

Tuesday, September 13, 2011

Self-Reliance

There's disaster recovery, and then there's how you recover from a disaster.

No, I'm not talking about Irene. I'm talking about the perfect storm of travel, customer visits, and a crashing hard disk.

It's a familiar story. I mean, hard drives gotta die sometime. That's what the MTBF (mean time between failures) rating IS. And since I use my laptop (yes, the big one) almost constantly,  it was really due to happen any time now.

So when I booted up Ubuntu and it asked me to perform a lengthy fsck routine twice in a row, I knew it was time to take action.

Step 1: Back up all the data I could, to whatever I had handy. Luckily, I carry a SanDisk Cruzer 16GB flash drive, so I could back up A LOT of my immediately important stuff. I had also backed up my laptop before I left, so I knew I wasn't completely sunk, just slowed down.

Step 2: Get a new drive. No problem, that's why God gave us Fry's.

Step 3: transfer the data from the old drive to the new one. I mean, that's the simple part, right?!? You just hook it up to a hard drive replicator (a technology that's been around for years making cloning and other techniques obsolete) and in an hour or two you are good to go.

Right? RIGHT?!?

Apparently not.

My first stop - the internal desktop support folks at my company - was a 3 hour odyssey of getting first Ghost and then "some other program I haven't used much" to run on an old Dell 386 with hand-spliced cables shooting out the front. While I'm sure that setup does work, it didn't like my Ubuntu drive and helpfully failed at the end of the 3 hour copy attempt.

Having given the home-team the chance to prove itself, I went to the experts - those wizards at Fry's - where I was certain they'd be able to get me back on my feet while I leisurely browsed their aisles.

Uh... no. First, I was informed in a condescending tone that what I wanted was called "ghosting" ("Yes," I thought while maintaining a rigid smile. "I remember Symantec Ghost. I also remember Norton Ghost. I also remember PartitionMagic. I also remember using a LapLink cable to provision an entire training room. And I'm also certain that what I want is a clone of my hard drive. But who am I to quibble?")

Second, I was informed that they weren't certain Linux would work correctly if the old drive had bad sectors. ("Weeeeellll, if the drive runs NOW, I am fairly certain it will run after copying it to the new hard disk. I mean, it's not going to DAMAGE the sectors on the new drive, right?")

Third, this was going to cost me $70. Fine.

Finally, it would take 2-3 days.

Okay. Buh Bye.

Taking my leave of the lack-of-service counter, I decided to see if wandering the aisles offered any inspiration. Plus, walking around Fry's always makes me feel better. It just does.

I knew that my laptop had two drive bays, so if I could score some drive rails and a flat SATA cable (as described here) I might be able to set up a RAID 1 setup and just replicate the whole darn thing.

Short story long, they didn't have either the rails or the SATA cable. What they DID have was a $20 SATA-to-USB port connector. Now I could connect both drives, but how to get my whole OS over to the new disk? I didn't want to spend the rest of the night installing all my stuff (not that I had the install disks with me in the first place).

In researching RAID options, I stumbled upon CloneZilla. A quick CD-burn later, and I was booting to a beautifully Linux-esque system that would let me copy my data from the old drive (now connected via the SATA-to-USB cable) to the new (safely ensconced inside the laptop). The first copy attempt - using default settings - ran for just 5 minutes but didn't work (too many disk read errors). But the second attempt - which included a pre-copy fsck and was a RAW (bit-for-bit, no matter what) copy - was a complete success.

It took 9 hours to run, but I was able to catch some z's during that time and awoke to a laptop that was actually usable and didn't leave my heart palpitating.

Tuesday, September 06, 2011

Solarwinds: Giving rights to NCM without giving away the farm

This is an enhancement to a thread that originally started on thwack:

Since NPM 10.1.x, everyone has enjoyed the ability to use AD groups rather than individual user accounts. Yay for NPM. But now in NCM, we have to somehow validate all these "new" users in NCM. Users who might not even have logged in yet, because you added an AD GROUP rather than a single account.
  1. To do that, in NPM you have to give the group (or account) "View Customization" right, which ain't gonna happen because then all your users can change anything about any screen anywhere.
  2. Not to mention that NCM doesn't allow you to add AD Groups, so you have to 
    1. Add user accounts individually to the NCM system
    2. OR stick with generic NCM roles and map them for each user in NPM
While I'm hopeful that the next version of NCM  (rumor has it that it will be version 7.0, due out by the end of 2011) will have some improvements to this, we've found a work-around.

This assumes you've set up the generic roles (webviewer, engineer, etc) on the NCM server.

  1. Log onto your SolarWinds website with an account that has “Change View” permissions
  2. Go to the "Config" tab and make sure you have set the credentials to use the account “Webviewer” (with whatever password you gave it in the NCM Console)
  3. Open an RDP session to your NPM server
  4. Start the Solarwinds Orion Database Manager utility
  5. Find the table “WebUserSettings”, right-click it, and choose "Query"
  6. Run the query: “Select * from WebUserSettings where settingname like '%cirrus%' and accountID like ‘%%’”
  7. Make sure the accountID returned is the one you used in step 1 above
  8. Click the read-write radio button and hit “refresh”
  9.  Change the AccountID for the 3 settings (CirrusIsFirstTime, CirrusISPassword, CirrusISUserName) to use the user account, in the form:
    DOMAIN\username
    ...or...
    DOMAIN\GroupID
Repeat this process for each remaining account: go back to the SolarWinds website and hit refresh (you will see that you have to re-enter your credentials), then go back to the RDP session, hit refresh, and rename your account again.
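If you would rather script step 9 than edit rows by hand in Database Manager, something like this builds the three UPDATE statements. The table and column names are the ones shown in the steps above; rename_statements is my own helper, and you should try this against a test copy of the database first:

```python
# The three Cirrus settings named in step 9.
CIRRUS_SETTINGS = ("CirrusIsFirstTime", "CirrusISPassword", "CirrusISUserName")

def rename_statements(old_account, new_account):
    """Build one UPDATE per Cirrus setting, re-pointing it at new_account
    (in the form 'DOMAIN\\username' or 'DOMAIN\\GroupID')."""
    return [
        "UPDATE WebUserSettings SET AccountID = '%s' "
        "WHERE AccountID = '%s' AND SettingName = '%s'"
        % (new_account, old_account, setting)
        for setting in CIRRUS_SETTINGS
    ]

statements = rename_statements("Admin", "DOMAIN\\NetEngineers")
```

Paste the resulting statements into the same Database Manager query window (with the read-write radio button selected) instead of renaming each row manually.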

Tuesday, August 30, 2011

Third (and final) Post on SolarWinds tricks

(Originally posted on www.thwack.com here)


This is the third in a series of posts where, in the name of giving back to the community, I'm going to share some of the customizations that make SolarWinds a little more robust for us and our customers.

First, a little background about my company and how we use SolarWinds. Sentinel is an IT solutions provider that focuses on communications technologies, Data Center, and Outsourced / Managed Solutions.

One of our key services (and the thing that lets me put food on the table) is a remote monitoring solution (based on SolarWinds, of course). All we have to do is drop a VPN router onto the customer's premises and set up NATs for the devices they want (read "pay us") to monitor, and we're good to go. This is a perfect fit for our customer base, where they don't want to divert resources for the ongoing investment in staff, software, and skills to set up an enterprise-wide monitoring and management solution (not to mention figuring out who's going to handle all those pesky tickets).

So our model - where we have many independent customers with different sets of values, different monitoring requirements and so on has driven us to come up with some customizations that focus on:
  • How to stop alerting on various devices (because of pilot projects, new customer onboarding, or maintenance windows) while continuing to collect statistics
  • How to set thresholds for devices when that could be different on nearly a device-by-device basis
  • How to ignore alerts based on the built-in monitors for CPU/RAM, etc on older or closed-architecture devices where a custom OID gave better data
This post is going to look at our solution for the third bullet - how to ignore built-in SolarWinds values in favor of custom OIDs. You can find the discussion about the first item here and the second item's information here.

If you've been playing along at home, you now have custom fields and alert logic to mute nodes, interfaces, volumes, and maybe even specialized items like APM; you have fields (and associated alert logic) to allow custom alert thresholds for CPU, RAM, disk space, bandwidth, and whatever else makes your heart beat faster.

But then you run into a situation where the built-in SolarWinds pollers don't work correctly for a particular device. Of course you can set up a custom Universal Device Poller (UnDP), but that doesn't stop the default poller from spewing false alarms.

We have that situation with a series of old Cisco 6500s where the standard SolarWinds poller mis-reports CPU; and on some Linux-based appliances where the vendor has locked out the standard Linux OIDs in favor of their own - but because Orion detects the machine type as "net-snmp", it attempts to pull CPU, RAM, etc. using the standard OIDs.

The problem (with regard to the ALERT_CPU, ALERT_RAM, etc, custom fields described in part 2 of this series) is that they are all using the standard CPU_LOAD element to compare against.

Of course, you COULD set the ALERT_CPU to some ridiculously high number, and then implement a custom alert. We did, but ran into two problems:
  1. It became difficult to figure out why an alert triggered. We'd see a CPU alert and then notice that the threshold was set to 105%, and things got really confusing until we realized the device in question used a custom CPU OID
  2. Remember those Linux-based appliances I mentioned earlier? On some of them the standard CPU OID reports 200% or more. Which always makes for jolly good times in the Ops center when they see THAT gauge on the screen.
So we've implemented OVR_STD_CPU and OVR_STD_RAM fields (both simple Yes/No custom properties) to get around this. Effectively, this tells SolarWinds that a non-standard OID is being used as the key element, and the standard OID should be skipped.

The core addition to the alert logic is:

Where ALL of the following are true
  OVR_STD_CPU is not equal to YES
  CPU_LOAD is greater than 90
The complete alert logic (including muting and standard ALERT_CPU) would now look like this:

Where ANY of the following are true
  Where ALL of the following are true
     N_MUTE is not equal to YES
     OVR_STD_CPU is not equal to YES
     ALERT_CPU is empty
     CPU_LOAD is greater than 90
  Where ALL of the following are true
     N_MUTE is not equal to YES
     ALERT_CPU is not empty
     OVR_STD_CPU is not equal to YES
     the field CPU_LOAD is greater than the field  ALERT_CPU
This would ensure that the standard CPU alert would NEVER trigger for the node in question. Then we can set up a different alert that uses the custom OID, which uses the existing MUTE and ALERT_xxx logic. Of course it will only trigger when the custom OID was applied to a node.

Where ANY of the following are true
  Where ALL of the following are true
     N_MUTE is not equal to YES
     OVR_STD_CPU is not equal to YES
     ALERT_CPU is empty
      [the custom CPU OID value] is greater than 90
  Where ALL of the following are true
     N_MUTE is not equal to YES
     ALERT_CPU is not empty
     OVR_STD_CPU is not equal to YES
      the field [the custom CPU OID value] is greater than the field ALERT_CPU
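To sanity-check the condition tree for the standard alert, here it is restated as a function. This is my own paraphrase of the logic above - SolarWinds evaluates these conditions inside its alert engine, not in code you write:

```python
def standard_cpu_alert(cpu_load, n_mute=None, ovr_std_cpu=None, alert_cpu=None):
    """True when the standard CPU alert (the full condition tree above)
    should fire. Values mirror the custom properties: "YES" or None for
    the flags, a number or None (empty) for ALERT_CPU."""
    # A muted node, or one whose standard OID is overridden, never fires.
    if n_mute == "YES" or ovr_std_cpu == "YES":
        return False
    # Empty ALERT_CPU falls back to the global 90% default.
    threshold = 90 if alert_cpu is None else alert_cpu
    return cpu_load > threshold
```

The custom-OID alert is the same shape, with the custom poller's value in place of CPU_LOAD and the OVR_STD_CPU test inverted.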

Thursday, August 25, 2011

The First (and maybe last) Thing I'll Say About Steve

...isn't even my own words.

I've been a big fan of Robert X. Cringely - not his real name, and a name used elsewhere in the IT industry by someone else. But this is the REAL Robert X. The one who used to write the back page of InfoWorld and gave us "Accidental Empires" (the book, the movie!).

Cringely is IT's answer to House. He's not always right, but he's always eventually right. And he's got the experience, insight, and tenacity to know when he's not right YET and to keep digging.

This morning his hastily scrawled blog post ("Cupertino Two Step") beat out anything else I've seen in the news so far.

The key take-aways:
"he’s not going away, not signing-up for Apple COBRA benefits, just giving up to Cook his duties as CEO. Jobs will remain an Apple employee and chairman of the board.  That makes him what’s called an executive chairman — one who is on the job every day. And that job he’ll be doing every day is overseeing Tim Cook’s execution of the corporate strategy designed by Steve Jobs."
and
"For all his administrative skills, Cook can’t fill Jobs’ visionary shoes, so I’d look for another leadership change, maybe tied to the release of Isaacson’s book. [...] I believe Walter Isaacson’s book will also function as Steve’s technology manifesto, part of his legacy. Once we have the grand plan, then it may make more sense just who should lead that plan’s execution during what will clearly be Apple’s best quarter in its 34 year history.  Steve Jobs is setting-up this (and us) for another grand reveal… just one more thing."

Mailpress and Wordpress Multi-Site

The Problem
I've got a client with a multi-site installation of Wordpress, who decided they needed to email both newsletters and individual blog posts not only from the main site, but from each sub-site as well.

MailPress seemed like the best choice, so I went with it. Installing and (Network) activating the plugin went fine. Setting up the main site went fine. Activating the customized MailPress theme (which was consistent across all sites) went fine.

But when I went to add users, I just flat-out couldn't. I saw the bulk-add box, but nothing else, even after I bulk added a couple of addresses.

The Cause
For whatever reason, MailPress created its special tables for the main site, but not for the sub-sites. These tables include mailpress_users which (as you might guess) holds the subscribed user names.

A Bit of Background
Wordpress multisite takes all the main tables (wp_options, wp_users, etc) and - for the subsites - adds a number. So your first subsite gets wp_2_options, wp_2_users, etc.

MailPress keeps that going by tagging the site prefix onto its tables. Instead of mailpress_forms, mailpress_users, etc. you get wp_2_mailpress_forms, wp_2_mailpress_users, and so on.

Deep in the heart of the MailPress installation there's a file /wp-content/plugins/mailpress/mp-admin/includes/install/mailpress.php, and in that file it indicates which tables should be created (or upgraded). I couldn't figure out what file is supposed to actually launch install/mailpress.php, but it doesn't matter; the commands to create the required tables were there, so I just pulled them out as you see below.

The Solution
If you are having this problem, open your favorite MySQL query tool (it's probably phpMyAdmin, and you probably launch it from your host's control panel). And let's face it - if you don't know what I'm talking about at this point, the better part of valour is to find someone who IS comfortable with MySQL and queries.

I even know this guy I'd recommend - his rates are pretty reasonable.

Use the code below, changing wp_2_ to match the prefix of each of your sites until the tables are all created.

CREATE TABLE wp_2_mailpress_mails (
 id                bigint(20)       UNSIGNED NOT NULL AUTO_INCREMENT,
 status            enum('draft', 'unsent', 'sending', 'sent', 'archived', '') NOT NULL,
 theme             varchar(255)     NOT NULL default '',
 themedir          varchar(255)     NOT NULL default '',
 template          varchar(255)     NOT NULL default '',
 fromemail         varchar(255)     NOT NULL default '',
 fromname          varchar(255)     NOT NULL default '',
 toname            varchar(255)     NOT NULL default '',
 charset           varchar(255)     NOT NULL default '',
 parent            bigint(20)       UNSIGNED NOT NULL default 0,
 child             bigint(20)       NOT NULL default 0,
 subject           varchar(255)     NOT NULL default '',
 created           timestamp        NOT NULL default '0000-00-00 00:00:00',
 created_user_id   bigint(20)       UNSIGNED NOT NULL default 0,
 sent              timestamp        NOT NULL default '0000-00-00 00:00:00',
 sent_user_id      bigint(20)       UNSIGNED NOT NULL default 0,
 toemail           longtext         NOT NULL,
 plaintext         longtext         NOT NULL,
 html              longtext         NOT NULL,
PRIMARY KEY (id),
KEY status (status)
);


CREATE TABLE wp_2_mailpress_mailmeta (
 meta_id           bigint(20)       NOT NULL auto_increment,
 mp_mail_id        bigint(20)       NOT NULL default '0',
 meta_key          varchar(255)     default NULL,
 meta_value        longtext,
 PRIMARY KEY (meta_id),
 KEY mp_mail_id (mp_mail_id,meta_key)
);


CREATE TABLE wp_2_mailpress_users (
 id                bigint(20)       UNSIGNED NOT NULL AUTO_INCREMENT, 
 email             varchar(100)     NOT NULL,
 name              varchar(100)     NOT NULL,
 status            enum('waiting', 'active', 'bounced', 'unsubscribed')    NOT NULL,
 confkey           varchar(100)     NOT NULL,
 created           timestamp        NOT NULL default '0000-00-00 00:00:00',
 created_IP        varchar(100)     NOT NULL default '',
 created_agent     text             NOT NULL,
 created_user_id   bigint(20)       UNSIGNED NOT NULL default 0,
 created_country   char(2)          NOT NULL default 'ZZ',
 created_US_state  char(2)          NOT NULL default 'ZZ',
 laststatus        timestamp        NOT NULL default '0000-00-00 00:00:00',
 laststatus_IP     varchar(100)     NOT NULL default '',
 laststatus_agent  text             NOT NULL,
 laststatus_user_id bigint(20)      UNSIGNED NOT NULL default 0,
 PRIMARY KEY (id),
 KEY status (status)
);


CREATE TABLE wp_2_mailpress_usermeta (
 meta_id           bigint(20)       NOT NULL auto_increment,
 mp_user_id        bigint(20)       NOT NULL default '0',
 meta_key          varchar(255)     default NULL,
 meta_value        longtext,
 PRIMARY KEY (meta_id),
 KEY mp_user_id (mp_user_id,meta_key)
);


CREATE TABLE wp_2_mailpress_stats (
 sdate             date             NOT NULL,
 stype             char(1)          NOT NULL,
 slib              varchar(45)      NOT NULL,
 scount            bigint           NOT NULL,
 PRIMARY KEY(stype, sdate, slib)
);
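If you have more than a couple of sub-sites, you can generate the per-site statements instead of hand-editing the prefix each time. retarget is a hypothetical helper of my own; all it does is swap the wp_2_ table prefix in the SQL above:

```python
def retarget(create_sql, site_number):
    """Rewrite SQL written against the wp_2_ prefix for another sub-site."""
    return create_sql.replace("wp_2_", "wp_%d_" % site_number)

# Example: re-issue the users table for sub-sites 3 and 4.
# (Stand-in for the full CREATE TABLE statement shown above.)
template = "CREATE TABLE wp_2_mailpress_users ( /* columns as above */ );"
per_site = [retarget(template, n) for n in (3, 4)]
```

Run the resulting statements through the same MySQL query tool, one site at a time, and check afterward that the subscribe box on each sub-site now shows the full form.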

Tuesday, August 23, 2011

SEO, Lies and Video Tape (part 5)

In the first post of this series, I pointed out that SEO companies sell features that can be done easily by most people, letting you avoid the cost. As a reminder, those 4 simple, easy-to-accomplish techniques are:
  • Having a descriptive domain name
  • Creating and submitting a sitemap
  • Descriptive titles and meaningful content
  • Getting other websites to link to you
In this post I want to explore the final item:

Getting other websites to link to you
For a long time, this was Google's secret sauce. Instead of using metadata, or just the words on your webpage, or some other easily-manipulated option to set your page ranking, Google looked for links TO you that existed on other sites.

They still do this, and it's still useful. It's also useful because it's an indicator of how popular your website ACTUALLY is on the internet. If people are talking about your site - linking to you, repeating your posts, etc - then you are popular. If they're not, you're not.

However, it's a hard trick to pull off without resorting to various "link exchange" programs and such. One thing you can do that helps a bit is make it very easy for readers to "like", "retweet", "+1", "Stumble" and "Digg" your pages and posts. Each of those creates another link out in the internet that can be picked up by Google and contribute to your page ranking.

But you have to make people WANT to click those options, and these days people blow right past them.

The best advice I can give you goes back to the previous point: create meaningful content that people care about, and it will be repeated by others and thus improve your ranking.

Monday, August 22, 2011

Second Post on SolarWinds Tricks

This is the second part of a 3-part series I posted over on www.thwack.com about ways to make their premier toolset - SolarWinds Orion Network Performance Monitor (NPM) - jump through hoops. You can find the first post at http://leonadato.blogspot.com/2011/05/i-posted-this-over-on-thwack.html (or on Thwack).



This is the second in a series of posts where, in the name of giving back to the community, I'm going to share some of the customizations that make SolarWinds a little more robust for us and our customers.

First, a little background about my company and how we use SolarWinds. Sentinel is an IT solutions provider that focuses on communications technologies, Data Center, and Outsourced / Managed Solutions.

One of our key services (and the thing that lets me put food on the table) is a remote monitoring solution (based on SolarWinds, of course). All we have to do is drop a VPN router onto the customer's premises and set up NATs for the devices they want (read "pay us") to monitor, and we're good to go. This is a perfect fit for our customer base, where they don't want to divert resources for the ongoing investment in staff, software, and skills to set up an enterprise-wide monitoring and management solution (not to mention figuring out who's going to handle all those pesky tickets).

So our model - where we have many independent customers with different sets of values, different monitoring requirements and so on has driven us to come up with some customizations that focus on:
  • How to stop alerting on various devices (because of pilot projects, new customer onboarding, or maintenance windows) while continuing to collect statistics
  • How to set thresholds for devices when that could be different on nearly a device-by-device basis
  • How to ignore alerts based on the built-in monitors for CPU/RAM, etc on older or closed-architecture devices where a custom OID gave better data
This post is going to look at our solution for the second bullet - how to set thresholds for devices on a device-by-device basis. You can find the discussion about the first item here.
If you've worked with SolarWinds alerts for more than 15 minutes, you probably already know the slippery slope they present. You start by setting an alert for CPU with a pretty logical threshold of "> 90% for 10 minutes". Soon after that, one of two things happens (or both, depending on your environment):
  1. Device "owners" complain about all the events you are missing because the threshold is too high
  2. The people receiving alerts complain they are getting too many false alarms because the threshold is too low
About this time you realize that various devices (depending on their machine type, OS, role, or even the specifics of that particular system) require custom thresholds.

So you start copying alerts and modifying them. And when you turn around, you realize you've got 237 different "high CPU" alerts and the logic of each of them ("machine type = "Windows" and IP_Address contains 1.2.3 and (custom field) IS_IMPORTANT = 1 and....") is enough to constipate Einstein.

In a fit of pique during a monitoring review meeting, you throw your hands up in the air and say "why don't I set up a separate threshold for Every. Flipping. Device?!?!?!"

Assuming you retained employment at your company after that outburst, I want to let you in on a secret:

You can.

The key here, much like the one presented earlier for muting, is a couple of custom fields and a little bit of Alert logic.

The Custom Fields
You can call them anything you want, but they should be numeric. Here at Sentinel, we've got ALERT_CPU, ALERT_RAM and ALERT_VOL. The first two go in the nodes table, the last one (logically enough) goes in the volume table.

The Alert Logic
Now we can alert on individualized thresholds for those elements on a node-by-node basis, leveraging the alert system's "complex conditions" option: "where (field or value) xxx is greater/less/equal to (field or value) yyy".

The alert logic for CPU would look something like this:

Where ANY of the following are true
  Where ALL of the following are true
     ALERT_CPU is empty
     CPU_LOAD is greater than 90
  Where ALL of the following are true
     ALERT_CPU is not empty
     the field CPU_LOAD is greater than the field  ALERT_CPU

This has the effect of setting a default threshold for any device that doesn't have a specific value in the custom alert field (that's the first "Where ALL" section); if it DOES have a value, the alert compares CPU_LOAD to whatever number is in ALERT_CPU instead.

For those who are following along from my previous article, here's the logic that includes the "mute" options:

Where ANY of the following are true
  Where ALL of the following are true
     N_MUTE is not equal to YES
     ALERT_CPU is empty
     CPU_LOAD is greater than 90
  Where ALL of the following are true
     N_MUTE is not equal to YES
     ALERT_CPU is not empty
     the field CPU_LOAD is greater than the field  ALERT_CPU

This is also useful if you want to MUTE just one element - say CPU. You have a device that simply "runs hot". You don't want CPU alerts, but you also don't want to mute the whole node, because you still want RAM alerts, interface alerts, etc. Set the ALERT_CPU to 105, and you will continue to collect CPU stats, but (since the CPU can never go above 100), you won't ever get a CPU alert.
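If it helps to see the whole thing outside the alert editor, here is the mute-plus-threshold logic restated as a quick function. This is just my paraphrase of the conditions above, not anything SolarWinds actually runs:

```python
def cpu_alert(cpu_load, alert_cpu=None, n_mute=None):
    """True when the CPU alert above fires: a muted node never alerts,
    and an empty ALERT_CPU falls back to the global 90% default."""
    if n_mute == "YES":
        return False
    threshold = 90 if alert_cpu is None else alert_cpu
    return cpu_load > threshold
```

Note how setting ALERT_CPU to 105 gives you the "mute just CPU" trick: since CPU_LOAD can never exceed 100, the comparison can never come back true.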

IN THE NEXT (AND FINAL) POST: How to ignore built-in alerts for CPU, RAM, etc. in favor of custom OIDS.

Friday, August 19, 2011

Why You Should Pay Attention in Math Class

As exemplified by this article on Gizmodo (AT&T's New Text Plan Overcharges You By 10,000,000 Percent), a good grasp of math - even relatively simple number sense - is never a bad thing to have.

The key part of that article:
"AT&T charges $25 for 2 gigabytes of mobile data, which states how much they think their bits and bytes are worth. That comes out to 80 megabytes per dollar. 80 megabytes will get you 500,000 text messages—assuming you're writing the largest possible message, which you're often not (i.e. "Hey" "Nothing" "lol").

Now divide that dollar by the 500,000 potential texts. That comes out to $0.000002 per text—two ten thousandths of a cent. A very, very, very small amount of money.
Now, let's say you send 5,000 texts a month. That's a large, though wholly realistic number. Multiply that by the above worthless cost per text, and you've got—hold onto your wallet!—$0.01. A penny for five thousand texts, according to how much AT&T says its data is worth in a data plan.

But outside of the data plan? Oh boy! Things get very different very fast. And by very different, I mean inordinately overpriced. Those same 5,000 texts, at a rate of $0.20 per message, will cost you $1,000. Not a penny—a grand. Two very different prices for literally the exact same thing."
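Gizmodo's arithmetic holds up, and it's worth redoing yourself. This sketch uses the article's numbers and the roughly 160-byte maximum message size they assumed:

```python
# Numbers from the quoted article.
plan_dollars, plan_gb = 25, 2
mb_per_dollar = plan_gb * 1024 / plan_dollars          # ~82 MB per dollar
texts_per_dollar = mb_per_dollar * 1024 * 1024 / 160   # ~537,000 max-size texts
in_plan_cost = 5000 / texts_per_dollar                 # 5,000 texts, data-plan pricing
out_of_plan_cost = 5000 * 0.20                         # same 5,000 texts at $0.20 each
```

Roughly a penny versus a thousand dollars for the same bytes, which is the whole point of the article.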

Read the rest of the article for the painful details - especially if you are on AT&T. Of course (again, as Gizmodo states), the rest of the carriers will likely follow suit in the near future.

Because, you know, it's all about the customers. And by "customers" I mean "stockholders", not "people who buy our stuff".

Thursday, August 18, 2011

SEO, Lies and Video Tape (part 4)

In the first post of this series, I pointed out that SEO companies sell features that can be done easily by most people, letting you avoid the cost. As a reminder, those 4 simple, easy-to-accomplish techniques are:
  • Having a descriptive domain name
  • Creating and submitting a sitemap
  • Descriptive titles and meaningful content
  • Getting other websites to link to you
In this post I want to explore the third bullet-point:

Descriptive titles and meaningful content
When a search engine looks at your site (and especially the new stuff on your site), it's reading words. I know that sounds obvious when you see it in print, but you have to keep it in mind. And just like you learned in 5th grade English about newspaper style, the title is given the highest importance, then the subtitle, then the first sentence, then the rest of the paragraph.

So, if your title says "The Rubaiyat of Omar Khayyam" and the first paragraph is a long series of jokes about Olivia Newton-John's "Xanadu" album, the search engine is going to have a hard time placing your post in search results for "how to potty train kittens" - which you didn't get around to mentioning until paragraph 4.

So while it sounds boring, following a standard news-article format is a great way to help your posts rank higher in search results.

Finally, if your website runs on blog or CMS software (rather than static web pages), consider changing from permalinks that are numeric (http://www.mysite.com/index.php?postid=115) to something more descriptive (e.g., http://www.mysite.com/how-to-potty-train-kittens).

One more thing for bloggers: canonical URL tags. Your website actually has several URLs - www.mysite.com, http://mysite.com, mysite.com/index.html, and a few others. Typing any of them gets you to the home page, but search engines treat each one as a separate website, which means your page ranking could get divvied up among the options. To avoid this, enable canonical tags - depending on the system you are using, there are various plugins or template options - so that every URL on your site is ALWAYS formatted the same way.
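If your platform doesn't have a plugin for this, the tag itself is a one-liner in the page's head. A minimal sketch, using the hypothetical mysite.com from above:

```html
<!-- Tells search engines which URL is the "official" one for this page,
     so the www/non-www and index.html variants all pool their rank here. -->
<link rel="canonical" href="http://www.mysite.com/" />
```

Each page gets its own tag pointing at its own preferred URL, not the home page.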

But all of that is just formatting tricks. The other part of this bullet point is much harder: creating meaningful content. There's no single tip I can give you for that. You know your audience (or you should, and shame on you if you have no idea who it is you are trying to speak to!), you know what they want to hear about, and you know how much (or how little) they can tolerate reading in a sitting. Should you break up your articles into smaller "nibbles" and post them on successive days? Or should you create one long masterpiece that has everything all in one place? Do you talk about the thermodynamics of microwave hairdryers or the latest fashion trends found in "Civil War Re-Creationist Magazine"? Only you can know that.

What I _can_ tell you is that if your audience finds your posts meaningful, they will not only keep coming back for more, they will repeat what they've read in tweets, Facebook posts, Google+ articles and more.

And that feeds into my next post...

Tuesday, August 16, 2011

Another great use for the Periodic Table

...although I (along with my good friend Doug) have to wonder where Pandora is.

http://blog.favo.rs/periodic-table-social-web/

SEO, Lies and Video Tape (part 3)

In the first post of this series, I pointed out that SEO companies charge for things most people can easily do themselves. As a reminder, those 4 simple, easy-to-accomplish techniques are:

  • Having a descriptive domain name
  • Creating and submitting a sitemap
  • Descriptive titles and meaningful content
  • Getting other websites to link to you
Last time I discussed domain names. In this post I want to explore the next item:

Creating and submitting a sitemap

A sitemap is what the name implies - a map of your site. The point is that the automated indexing routines ("crawlers") from Google, Yahoo, Bing and others can work faster if they have a list of web pages to scan. And that's what a sitemap is.

NOT having a sitemap doesn't mean your site won't get indexed. But it does mean that pages might be overlooked, or that updates to your site won't show up in search queries as quickly - all of which translates to lost visitors.

Submitting your sitemap isn't strictly necessary - all the search engines will find it eventually, unless you've named it something completely weird and/or stuck it into a stupid directory name like "golf scores". That having been said, if you are being a diligent web designer, you can push the issue and remove all doubt.

In most cases, you will need an account with each of the search engines in order to submit your sitemap. That also shouldn't be an issue for you since, as a web designer, you ought to have those accounts anyway.

Also, let's be honest: Google is THE game in town. So it behooves you to get a Google Webmaster account (as well as a Google Analytics account). Neither costs you anything. I'm not going to take time here to go over all the bells and whistles of these tools, but you can get the ball rolling by going to http://www.google.com/webmasters/

To add your sitemap to Google:
  1.     Sign in to your Google Webmaster account.
  2.     From the dashboard, click the "Add A Site" button
  3.     Go through the steps to verify the site
  4.     Click on the site to bring up its specific stats
  5.     Click "Site Configuration" from the sidebar to expand the list
  6.     Click "Sitemaps"
  7.     Click the "Submit a Sitemap" button and follow the prompts

To add your sitemap to Yahoo!:
  1.     From the Search Engines page, copy the link to your Sitemap file.
  2.     Sign in to your Yahoo! account.
  3.     Enter the URL for your site in the Submit Site feed field (e.g., http://www.yourdomain.com)
  4.     Click Submit Feed.

Bing? It has its own Webmaster Tools with a similar sitemap-submission option once you've verified your site.

Creating a sitemap is very simple. The instructions below really depend on whether your site is a "regular" static site made up of a bunch of pages, or if it's more like a blog.

Regular Sitemaps

To create a sitemap for your regular site, you either have to generate it or write it by hand. If you immediately thought "oh let's do it by hand, that sounds exciting" then I'm done speaking to you. Please leave my website. I'll wait.

OK, now that the mouth-breathing village idiots have left the building, we can move on.

The easiest way to do this as a one-time-shot is to use one of the (many) online sitemap generators. For the sake of example, I'm using http://www.xml-sitemaps.com/ . But feel free to use any one you want.
  1. Go to http://www.xml-sitemaps.com/
  2. Enter the website URL
  3. Enter your change frequency (that tells the web crawlers how often to come back and recrawl the site)
  4. Fill in any other options based on the sitemap generator you are using and click Go/Start/Run/Whatever
  5. Once the process is complete, you'll be presented with downloadable versions of your sitemap in at least a couple of formats (xml, txt, html). Go ahead and pull them down to your local computer
  6. Now upload them to the root folder on your website.
  7. Finally, using the steps I outlined earlier, submit your sitemap to Google, Yahoo, and wherever else your fancy deems important.
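For the curious, the XML these generators produce follows the sitemaps.org protocol and looks roughly like this (the URLs and dates here are made-up examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.mysite.com/</loc>
    <lastmod>2011-08-16</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.mysite.com/about.html</loc>
    <changefreq>monthly</changefreq>
  </url>
</urlset>
```

Only the loc tag is required per URL; lastmod, changefreq and priority are optional hints to the crawlers.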

Blog Sitemaps

Setting up a sitemap for a blog is even easier than for a regular site, because most blogging platforms have add-ons or plugins to do the job for you.

For example, in Wordpress I recommend adding the plugin "Google XML Sitemaps". From there the options are very straightforward, and it even submits the sitemap to Google for you.

TRICK: Skipping webpages and folders with robots.txt

While it might sound counter-intuitive at first blush, every site has folders and even web pages that you DON'T want to show up in search results. Things like the "images" folder where you put all your webpage graphic elements, or the webpage "testme.html" which you use to test out new stuff before adding it to the live pages, or the "documentation" folder where you store all the design information about the website.

(What's that, you don't HAVE documentation on your website? Here's some advice: Don't say that out loud to your customer.)

To get Google, Yahoo, Bing, etc. to overlook pages, you use a robots.txt file. This file - also found at the root of your website - tells web crawlers which pages to look at and which to skip. In its simplest form, it looks like this:
User-agent: *
Disallow:
This tells the web crawler that the file applies to ALL search agents, and that there are NO pages disallowed. Using my example above, let's say you wanted to tell the search engines NOT to index /images, /documentation and testme.html. Your robots.txt file would look like this:
User-agent: *
Disallow: /images/
Disallow: /documentation/
Disallow: /testme.html
While there is a lot more the robots.txt file can do for you, I want to leave you with one reminder: robots.txt is a well-known filename that anyone can pull up on your website. So don't use it to try to hide things from visitors, because robots.txt is basically a big fat finger pointing at those directories saying "look here for good stuff".
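One more robots.txt trick worth knowing: the sitemaps.org protocol also lets you advertise your sitemap's location right in the file, which the major crawlers will pick up on their own even if you never submit it manually. Extending the example above (the sitemap URL is hypothetical):

```
User-agent: *
Disallow: /images/
Disallow: /documentation/
Disallow: /testme.html
Sitemap: http://www.mysite.com/sitemap.xml
```

Note that the Sitemap line takes a full URL, not a relative path, and applies to all crawlers regardless of the User-agent sections.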

Make sure you check back here (or better yet, use the "sign up" options in the sidebar to add this to your RSS feed or receive email notifications) for the next installment, where we rise up out of the weeds of step-by-step instructions to talk about descriptive titles and meaningful content.

Sunday, August 14, 2011

Telecommuting: I have had this conversation

Well, except for the abject honesty of the boss. But as a self-described "telecommuting evangelist", this comes amazingly close to real-life discussions.

 copyright Scott Adams, etc etc, blah blah. As if you couldn't tell.

Friday, August 12, 2011

SEO, Lies and Video Tape (part 2)

In my last post, I pointed out that SEO companies charge for things most people can easily do themselves. As a reminder, those 4 simple, easy-to-accomplish techniques are:
  • Having a descriptive domain name
  • Creating and submitting a sitemap
  • Descriptive titles and meaningful content
  • Getting other websites to link to you
In this post I want to explore the first item:

Descriptive Domain Names
This may be the easiest of the 4 items, but it's also the one that could cause you the most grief. If you have a site that sells timeshare apartments for hamsters in Aruba, then a domain name of "hamster-aruba-timeshares.com" is going to automatically rank higher in searches than "hamsterpads.com" or "bluewatersandexercisewheels.com". Even if the latter two are poetic and evocative, the fact is that search engines look at the domain name itself to see if there is a match.

It also means that if you are selling wicker baskets in Tupelo, Mississippi - and even though your company may be called "Southern Criss Cross" - you are better off with a domain name like "TupeloWicker.com". Unless you have so much corporate recognition that people will search for you at "southerncrisscross.com".

But remember the point of searches - they're there to help people who DON'T already know you. The ones who do will find you anyway.

One option is to buy a couple of domain names and point them all to the same website. Just don't go hog wild on that. Some search engines will actually rank your site LOWER if they see you have 5 or 6 domain names all pointing to the same place. Also, with 5 domain names, your page ranking stats could get divvied up among those names, resulting in a lower overall page rank. (Although there is a way around that - "canonical URLs" - which I'll describe later in this series.)

Two or three domains, however, should be OK if you really think you need them.

The trick in all this, as you can imagine, is to find something that is memorable while still being descriptive.

Good luck!

Wednesday, August 10, 2011

SEO, Lies and Video Tape

Search Engine Optimization (SEO) companies promise big money (for those using their services, naturally). They toss around figures about tens of millions of online searches every day that could be generating thousands of dollars in revenue, but only if your site ranks high enough on searches - only if you act now, no time to wait, operators are standing by!

Page Rank - how high up a website appears in a search query - is the holy grail for web designers (and those who hire them). Page ranking holds the key to new visitors, who translate into profits (or at least attention which is basically the same thing in internet terms). The higher up you are in a set of query results, the more likely someone is to click your link and visit your site.

SEO is the process of improving page rank. It spins the flax of simple Google or Yahoo queries into click-through gold.

This makes consultants specializing in SEO, in effect, prospectors who know "there's gold in them thar clicks" and claim to hold the secret to mining the veins of data that will yield untold riches.

And because of that, there are a lot of people who work hard to make all that prospecting sound very difficult, specialized, arcane and - most importantly - expensive.

Based on my experience, it's not. In fact, it comes down to 4 key techniques. Everything else is snake oil. All 4 techniques are things that ANYONE - even the greenest novice - can do. It doesn't take a master's degree in programming. It doesn't require hours of setup or maintenance. Most of it can be done in about 2 hours.

Those 4 simple, easy-to-accomplish techniques are:
  • Having a descriptive domain name
  • Creating and submitting a sitemap
  • Descriptive titles and meaningful content
  • Getting other websites to link to you
That's it. No midnight sacrifices of twinkies to the pantheon of database deities. No clandestine payments to Google.

In the next several posts I'm going to break down each of those items. Meanwhile, I want to answer what is probably going through your head right now:

So what am I paying for?
Aside from the 4 things I've already mentioned, what do typical SEO companies do for you? Well, it's not exactly nothing, but as I mentioned before, the lion's share of SEO improvements are the things I've already given you for free.

They might offer you services - helping you add a widget to retweet articles or to let readers "like" you on Facebook. They might offer you analytics - figuring out what your relative page rank is now and how many clicks you are getting so you know where you stand.


These are all useful features, but they are things any good web designer/administrator should be able to provide for you. They are things lots of supposedly "novice" web admins can do too. Caveat emptor!

The only other thing I've seen is that some SEO companies own and run a series of unrelated websites called link farms. You pay them to add you to all (or some) of their websites, which could improve your page rank based on the last tip I gave you ("Getting Other Websites To Link To You"). The interesting thing is that Google adjusts for this - the "value" of a link is relative to the value of the page it appears on, so a link coming from a page that is part of a link farm is worth almost nothing. But most customers don't realize that. They just see that their link will appear on 20 other websites and think "I'm going to be sooo popular!".

Stay tuned for the rest of this series where I give details (and in some cases step-by-step instructions) on how to be your own SEO expert.

Wednesday, July 13, 2011

Google+

Someone on Facebook just asked me what was the compelling reason to look at Google+. Here's my response:


Seriously, it's less scattered than FB. It's easier to manage people you are associated with, and to decide which messages/posts/whatever each group sees on a post-by-post basis (if you want to get down to that level). There are NO ads.


Right now my FB sidebar is telling me that 3 half-naked girls younger than my daughter are looking for me, and another box saying that I've been chosen because there are "Hacker's Wanted". I'm not sure which is more offensive.


It's very very VERY young right now. There are no hooks to/from Google+ to twitter, FB, blogs, etc. But there's a momentum there that indicates to me it will get better, and get better very quickly.

When FB came out, those of us dabbling with MySpace jumped ship almost immediately. It was clear that FB was flat-out better in every way - the "culture" of the tool, the layout, etc. Here again, looking at G+, you can see this is the next step in the evolution of social media. LinkedIn was *almost* it, but it never quite made the leap to being really REALLY social. It was (and still is) a business tool. And I like it that way.

In the world of computers, sometimes making something better requires a complete rewrite from the ground up. G+ is that rewrite.

IMHO. YMMV. Caveat clickor. Objects in the rear view mirror may be more social than they appear.

Tuesday, May 17, 2011

Yes, it's big. Get over it.

A couple of years ago, I had the opportunity to buy any laptop I wanted (within reason.) My thought was that I wanted something larger than a 15.x" screen, so that put me in the 17.x" camp.

I work with a great VAR so I just bounced ideas off him, along with a few models I had found.

He responded that he had a laptop with 1GB more RAM than my choice, and 100GB more disk. It was only $100 more. Was I interested? Of course I was.

It was, he told me, a little bigger than 17" as well.


The Toshiba Qosmio G55-804 is 18.4" to be exact. Big enough that you can't fit it into most standard laptop bags. But it still fits under the seat when you are taking a flight.

I actually like this laptop a lot. It's got a great screen, good keyboard and it runs well under Ubuntu.

What I *don't* like about it is that everyone feels obligated to comment on it. There are guys at work who have - I'm not making this up - said something about it every time they have seen it. For over a year now.

It's a laptop, people. It's just a laptop.