Wednesday, July 2, 2014

Py2Exe Troubleshooting

So I had this Python script that I wanted to bundle up in a binary to distribute to Windows systems. It worked fine when run with the Python interpreter, but was throwing a strange error after being compiled by Py2Exe:

Traceback (most recent call last):
  File "download_random_files.py", line 2, in <module>
  File "requests\__init__.pyc", line 58, in <module>
  File "requests\utils.pyc", line 25, in <module>
  File "requests\compat.pyc", line 7, in <module>
ImportError: cannot import name chardet

Which I thought was interesting, because I had no clue what chardet was.

If you're an astute observer, you'll have read the rest of the error message... :P


Ok, so it's related to the requests package, so what? That was pretty much where the trail ended for me, all the troubleshooting I found online was only *loosely* related to my error.


Basically, the issue is in the py2exe setup.py file.

My initial setup.py file was basically just...

from distutils.core import setup
import py2exe

setup(console=['controller.py'])


Which isn't taking advantage of any of the features of py2exe. After some digging, I found a quick and dirty solution. For some reason, py2exe was having problems locating the requests package, which was necessary for the script to run. I found that by explicitly specifying the requests package in my setup file, the issue corrected itself.

from distutils.core import setup
import py2exe

setup(
  console=['controller.py'],
  options = {'py2exe': {'packages': ['requests']}})


With that said, there are some really cool options/features you can add when you convert your Python script to a binary. Check them all out at: http://www.py2exe.org/index.cgi/ListOfOptions
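For instance, here's a sketch of a setup.py that bundles everything into a single standalone .exe (this assumes py2exe is installed and reuses the 'controller.py' script from above; the option names come from the page linked above):

```python
from distutils.core import setup
import py2exe

setup(
    console=['controller.py'],
    zipfile=None,  # merge the library archive into the .exe itself
    options={'py2exe': {
        'packages': ['requests'],  # the fix from above
        'bundle_files': 1,         # 1 = bundle everything, including the interpreter
        'compressed': True,        # compress the library archive
    }})
```

The result is a single file you can drop on a Windows box with no Python install required.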


Tuesday, May 27, 2014

I found this SQL post so awesome that I wanted to mirror it here so it doesn't go away. Really nice explanation of indices in SQL Server.

Source: http://www.mssqltips.com/sqlservertip/1206/understanding-sql-server-indexing/
Author: Greg Robidoux


Problem
With so many aspects of SQL Server to cover and to write about, some of the basic principles are often overlooked. Several people have asked questions about indexing, along with a general overview of the differences between clustered and non-clustered indexes. Based on the number of questions we have received, this tip will discuss the differences between indexes and some general guidelines around indexing.
Solution

From a simple standpoint, SQL Server offers two types of indexes: clustered and non-clustered. In its simplest definition, a clustered index is an index that stores the actual data and a non-clustered index is just a pointer to the data. A table can only have one clustered index and up to 249 non-clustered indexes. If a table does not have a clustered index it is referred to as a heap. So what does this actually mean?
To further clarify this, let's take a look at what indexes do and why they are important. The primary reason indexes are built is to provide faster access to the specific data your query is trying to retrieve. This could be either a clustered or non-clustered index. Without an index, SQL Server would need to read through all of the data in order to find the rows that satisfy the query. If you have ever looked at a query plan, the difference would be an Index Seek vs. a Table Scan, as well as some other operations depending on the data selected.
Here are some examples of queries that were run. These were run against the table dbo.contact, which has about 20,000 rows of data. Each of these queries was run with no index as well as with clustered and non-clustered indexes. To show the impact, a graphical query plan has been provided; this can be created by highlighting the query and pressing Ctrl-L in the query window.
1 - Table with no indexes

When the query runs, since there are no indexes, SQL Server does a Table Scan against the table to look through every row to determine if any of the records have a lastname of "Adams". This query has an Estimated Subtree Cost of 0.437103. This is the cost to SQL Server to execute the query; the lower the number, the less resource intensive for SQL Server.
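The query plans in the original post were images and aren't reproduced here, but the statements being compared look roughly like this (a sketch; the index names are made up, while the table and column names come from the article):

```sql
-- Example 1: no indexes -- this query forces a Table Scan
SELECT * FROM dbo.contact WHERE lastname = 'Adams';

-- Example 2: non-clustered index on lastname (Index Seek + RID Lookup)
CREATE NONCLUSTERED INDEX IX_contact_lastname ON dbo.contact (lastname);

-- Example 3: clustered index on lastname instead (Index Seek straight to data pages)
CREATE CLUSTERED INDEX CIX_contact_lastname ON dbo.contact (lastname);

-- Example 4: covering query -- the non-clustered index alone satisfies it
SELECT lastname FROM dbo.contact WHERE lastname = 'Adams';
```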

2 - Table with non-clustered index on lastname column

When this query runs, SQL Server uses the index to do an Index Seek, and then it needs to do a RID Lookup to get the actual data. You can see from the Estimated Subtree Cost of 0.263888 that this is faster than the above query.

3 - Table with clustered index on lastname column

When this query runs, SQL Server does an Index Seek, and since the index points to the actual data pages, the Estimated Subtree Cost is only 0.0044572. This is by far the fastest access method for this type of query.

4 - Table with non-clustered index on lastname column

In this query we are only requesting the lastname column. Since this query can be handled by just the non-clustered index (a covering query), SQL Server does not need to access the actual data pages. Based on this query, the Estimated Subtree Cost is only 0.0033766. As you can see, this is even better than example #3.
To take this a step further, the below output is based on having a clustered index on lastname and no non-clustered index. You can see that the subtree cost is still the same as returning all of the columns even though we are only selecting one column.  So the non-clustered index performs better.
5 - Table with clustered index on contactId and non-clustered index on lastname column

For this query we now have two indexes: a clustered and a non-clustered. The query that is run is the same as example 2. From this output you can see that the RID Lookup has been replaced with a Clustered Index Seek. Overall it is the same type of operation, except using the Clustered Index. The subtree cost is 0.264017. This is a little better than example 2.
So based on these examples you can see the benefits of using indexes. This example table only had 20,000 rows of data, which is quite small compared to most database tables, so you can imagine the impact this would have on very large tables. The first idea that comes to mind is to use all clustered indexes, but because a clustered index is where the actual data is stored, a table can only have one. The second thought may be to index every column. Although this may be helpful when querying the data, there is also the overhead of maintaining all of these indexes every time you do an INSERT, UPDATE, or DELETE.
Another thing you can see from these examples is the ability to use non-clustered covering indexes, where the index satisfies the entire result set. This is also faster than having to go to the data pages of the heap or clustered index.
To really understand what indexes your tables need, you should monitor access using a trace and then analyze the data, either manually or by running the Index Tuning Wizard (SQL 2000) or the Database Engine Tuning Advisor (SQL 2005). From there you can tell whether your tables are over-indexed or under-indexed.

Wednesday, March 26, 2014

Splunk - Basic Custom Search Command Example

Basic Custom Search Command Example(s)

Executing an arbitrary shell script w/o parameters

For this exercise, we will be executing a very basic script with no Splunk parameters. The purpose is simply to execute a Python/shell script. You could execute a shell script directly, but since you will likely want to pass data or query results to it eventually, I'm using Python to execute the shell script.
  • Create your test script in /$SPLUNKHOME/etc/apps/<appname>/bin/test.py
  • Example test.py code:
import os
os.system("(cd /splunkscripts/; ./test.sh)")
  • This navigates to a directory I created at the root directory called "splunkscripts" where I house all of the various scripts I use related to Splunk. It then executes test.sh.
  • Example test.sh code:
#!/bin/sh
echo "This is a successful test." > splunktest.txt

  • This will echo the "Hello World"-style test string to splunktest.txt. 
  • Make sure both scripts (test.py and test.sh) are executable via chmod (i.e. chmod 755 )
  • Edit your /$SPLUNKHOME/etc/apps/<appname>/local/commands.conf with the following:

[shelltest]

type = python
filename = test.py
generating = false
streaming = false
retainsevents = false

  • Note: Generating/Streaming/Retainsevents all default to false, but for real world uses you will likely end up generating results. Be aware of these. Read the Splunk docs on custom searches as well: http://docs.splunk.com/Documentation/Splunk/latest/Search/Customsearchcommandexample 
  • Restart Splunk.
  • Go to your appropriate Splunk app where you stored this script and search: | shelltest
  • Navigate to /splunkscripts/ and see if your test.sh wrote out the data to the splunktest.txt.
  • If you get an "Error Code 1", then there is an issue with your Python/Shell code.
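On that last point: os.system only hands back a bare exit status, which is why failures surface as an unhelpful "Error Code 1". Here's a more defensive sketch of test.py using subprocess (the /splunkscripts path and test.sh name are just the examples from above) that captures stderr so you can see why a script failed:

```python
import subprocess

def run_script(script, cwd):
    """Run `script` with `cwd` as its working directory.

    Returns (exit_code, stderr_text) so a failure can be reported,
    instead of surfacing only as "Error Code 1" in Splunk.
    """
    result = subprocess.run([script], cwd=cwd, capture_output=True, text=True)
    return result.returncode, result.stderr

# In test.py this would replace the os.system() call, e.g.:
#   code, err = run_script("./test.sh", "/splunkscripts")
#   if code != 0:
#       raise SystemExit(err or code)
```

Writing the error text out (or raising it) means the actual shell error lands somewhere you can read it, rather than being swallowed by the exit code.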