If you’re just beginning to play with SSIS on a 64-bit box you’re pretty likely to hit a roadblock if your data sources have 32-bit ODBC drivers. The error will become obvious immediately when you come to execute your package in BIDS (Business Intelligence Development Studio)…
The ODBC Driver Manager error reported in the Output window will be: “The specified DSN contains an architecture mismatch between the Driver and Application”. Thankfully this is a pretty easy one to sort out; all you have to do is…
- Right-click on the Project name in the Solution Explorer and select Properties…
- Choose “Debugging” in the left-hand pane…
- Change the Run64BitRuntime value to False (by double-clicking on it).
- Click OK and re-run your package.

For reference, the help text states that if the 64-bit SSIS runtime is not installed this setting is ignored, so if you’re working on a 32-bit box you would in all likelihood never experience this issue.
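One caveat worth noting: Run64BitRuntime only affects debugging inside BIDS. If the same package is later run from the command line on a 64-bit box it will hit the same mismatch, so you have to invoke the 32-bit runtime explicitly – for example via the 32-bit copy of DTExec (the path shown is the SQL Server 2008 default; the package path is purely illustrative):

```
"C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /File "C:\Packages\MyPackage.dtsx"
```

In a SQL Agent job step the equivalent is ticking the ‘Use 32 bit runtime’ option under Execution options.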
Starting my third day at SQLBits with a hat-trick of talks on technologies I’m unfamiliar with was a bit of a head-bender, but an enjoyable one nonetheless. The first talk I chose was a great overview of how to use completely free tools (SQL Server Express 2008 R2, among others) with completely free spatial data (Ordnance Survey’s Open Data) to create spatial reports in Reporting Services. Since it’s not an area I’m working in at the moment I didn’t get any major takeaways, but I do have an understanding of what’s possible and how to go about it – if you’re in the same boat you should check out FWTools, Shape2SQL and Grid InQuest.
The second talk was Matt Whitfield‘s ‘CLR Demystified’. Not being a developer I’ll admit that much of it went over my head, but I got the basic principles and, most importantly, I know what’s possible and where I might make use of CLR. The most interesting avenues for me are the ability to write custom aggregate functions and define custom data types – I was also impressed with the opportunity to increase performance in certain text processing / forward log parsing situations.
The next talk I attended was a gentle introduction to PowerShell, and James Boother did a good job of showing where it might be useful for admin tasks; in particular the example of purging old backup/log files based on age lit my eyes up. There was also a demo of PowerShell authenticating with Twitter and posting tweets as admin alerts; alas the demo failed, but that could well have been a timeout on the Uni’s WiFi network or just plain old demo-gremlins. I must say though that I still can’t help viewing PowerShell with some sense of disappointment, because for all its flexibility and power it’s so damned wordy, and many of the tasks it performs could be achieved with less code in a good old Bash script.
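To illustrate that wordiness gripe, the backup-purge task from the demo can be sketched as a short Bash function – the file extensions and the 30-day retention here are my own assumptions, not James’s actual script:

```shell
#!/bin/sh
# Delete backup/log files older than a given number of days.
# Usage: purge_old_backups /path/to/backups 30
purge_old_backups() {
    dir=$1
    days=$2
    # -mtime +N matches files last modified more than N days ago;
    # -print lists each victim before -delete removes it.
    find "$dir" -type f \( -name '*.bak' -o -name '*.trn' \) \
        -mtime +"$days" -print -delete
}
```

The rough PowerShell equivalent from the session would filter `Get-ChildItem` on `LastWriteTime` and pipe the results to `Remove-Item` – the same job, just with rather more typing.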
As with Friday I attended Quest‘s lunchtime session, run by Kevin Kline, Ian Kick, Brent Ozar and Buck Woody – they’re some of the most experienced guys in the SQL community, and when you get them together they’re funny as hell too, so I was both entertained and informed by their myth-busting quiz.
My first afternoon talk was Gary Short’s session on NoSQL, which predictably sparked a few polite but irate rebuttals from argumentative DBAs, but the session itself was an excellent whistle-stop tour of the predominant NoSQL technologies and use cases. It was interesting to hear from someone experienced in the field that nobody has quite put together all the pieces to hook up BI tools (which traditionally expect relational/dimensional models or OLAP sources) to the NoSQL back-ends. That’s a shame, since I might need to do exactly that pretty soon – I guess I’m going to have to get my hands dirty then!
The final talk was from Kevin Kline of Quest, who covered SQL injection. It was an informative talk that gave me pause for thought about a couple of ‘best practices’ that I probably ought to harden a little. Kevin recommended a few tools that I’ll definitely be checking out at some point, notably:
- HP Scrawlr
- Source Code Analyser for SQL Injection
- Assessment and Planning Tool
- Discovery Wizard
And a few handy sites/articles:
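The first line of defence against injection is, of course, never concatenating user input into dynamic SQL. As a minimal T-SQL sketch (the table, column and variable names here are illustrative, not from Kevin’s slides), the parameterised form looks like this:

```sql
-- Dangerous: user input becomes part of the SQL text itself.
-- SET @sql = N'SELECT * FROM Customers WHERE Name = ''' + @input + '''';

-- Safer: the input is bound as a typed parameter and never parsed as SQL.
DECLARE @input nvarchar(100) = N'O''Brien';  -- would break/inject in the string-built version
DECLARE @sql nvarchar(max) = N'SELECT * FROM Customers WHERE Name = @name';

EXEC sp_executesql
    @sql,
    N'@name nvarchar(100)',
    @name = @input;
```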
Once more it was a great conference, and the free day was every bit as good as the paid day; I can honestly say that I walked out of the event already looking forward to the next one.
Many people only attend the free ‘Community Day’ of SQLBits and I can understand why given the cost (£125) for the Friday sessions but if SQL Server is how you make your living I really do think it’s worth the money. It’s not even that the Friday sessions are significantly different in content, it’s really just more of the same high level of quality you get on Saturday but when it comes to SQLBits more is definitely better.
It’s always a tough choice picking which sessions to attend, so it’s often best to go with speakers you know will be good; despite having spent the entire previous day with Maciej Pilecki in the SQLBits Training Day, I made my first session Maciej’s SQL Server Statistics talk. Despite a few initial technical gremlins the talk went well and gave a few insights into how statistics are used by the query optimiser. The key takeaways were to always keep both AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS turned on, to consider turning on AUTO_UPDATE_STATISTICS_ASYNC (it doesn’t force queries to wait for stats to be updated, but subsequent queries will benefit), and to run sp_updatestats after any major updates or to reindex your tables periodically.
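Those takeaways translate into just a few statements – a sketch, with the database name obviously illustrative:

```sql
-- Keep automatic statistics creation and updates on (they default to ON).
ALTER DATABASE [MyWarehouse] SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE [MyWarehouse] SET AUTO_UPDATE_STATISTICS ON;

-- Optional: update stats in the background so the triggering query
-- doesn't wait; only subsequent queries see the fresh statistics.
ALTER DATABASE [MyWarehouse] SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- After a major data load, refresh statistics across the database.
EXEC sp_updatestats;
```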
My next session was Brent Ozar‘s Virtualisation and SAN talk. This gave me a whole load of questions to go back to my SAN administrator with, as well as a whole load of tests I intend to perform before I deploy my next Data Warehouse on a Hyper-V guest. One concept that was completely new to me was the balloon driver that hypervisors use to encourage Windows to free up RAM; since SQL Server is a good citizen it can end up flushing the entire Buffer Pool and wrecking your performance – the solution is to ensure that Dynamic Memory is disabled in the Hyper-V Manager. Some great related resources can be found at…
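Alongside disabling Dynamic Memory, a complementary step worth considering inside the guest (my own suggestion, not something from Brent’s talk) is pinning SQL Server’s memory ceiling so the buffer pool isn’t sized against RAM the hypervisor might later reclaim – the 16 GB figure is purely illustrative:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool at a fixed size appropriate to the guest's allocation.
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```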
The lunchtime sponsor talk I chose was the one from Quest covering IT horror stories. It was a brilliant session with plenty of audience interaction that steered clear of pimping any specific Quest products, and instead just showed that the people who work there are experienced, pragmatic and generally just nice guys. I think this approach is far better than the extended product demos that many software companies tend to give as their lunchtime sessions: those are only of interest if you’re genuinely considering the product, and if you’re not they’ll do little to increase brand awareness with a room full of bored people on Twitter or Facebook.
After lunch I went for Buck Woody‘s talk on Business Continuity, which provided a few simple paths and the crucial tasks to help get people started on a business-relevant disaster recovery strategy. I was particularly impressed with one of the central themes of the talk, which was (reading between the lines a little) that even if you think it’s ‘not your job’ to put a DR plan in place, as the company’s ‘Data Professional’ people will still look to you in times of failure. If you’ve already done all of the planning you’ll be the guy with a calm head solving the problem – and if you’re not that guy, start getting your CV ready. Despite having heard the name and read a few of his blog posts over the years I’d never heard Buck speak; he’s great, so if you get the chance to see him you definitely should.
Well that wraps-up the day nicely, I’ll be posting Saturday’s round up soon after I’ve written it!
As a SQL Server professional I love SQLBits; I was there at the first one and I’ve attended most of them since, but until now I’d not managed to attend one of the paid training days – mainly because of the cost. It’s not that the cost is high – £250 for a day’s training from some of the best minds in the industry is quite frankly a bargain – but it’s still a lot if you’re paying out of your own pocket, and my previous employer had a bit of a weird training policy. Thankfully I’m now in a situation where, while I still have to be prudent with budget, my employer is likely to support me if I really think something is good value for money and worth spending the time on, so I finally managed to attend a Thursday pre-conference event.
Choosing one of the courses was quite an ordeal, and I was very tempted by Brent Ozar‘s Virtualisation and SAN course, partly because having seen him before I know he’s a great speaker and also because it’s going to be directly relevant to my work very soon. In the end I chose Maciej Pilecki‘s course on the SQLOS, because the more I understand about SQL Server’s internals the more I understand everything else I learn about SQL Server – I read Kalen Delaney’s Inside SQL Server 2000 cover-to-cover several years ago and it pushed my understanding of the field forward immensely.
I won’t regurgitate the content of Maciej’s course, but he covered a massive range of topics and introduced me to several new concepts as well as strengthening my knowledge of others; the rough outline was:
- SQLOS history, services and DMVs
- Using affinity masks effectively
- Non-Uniform Memory Access (NUMA)
- Soft NUMA
- Configuring TCP port to NUMA node affinity
- Virtual Address Space on 32-bit and 64-bit
- AWE and Large Pages
- Buffer Pool architecture and memory allocation
- Extended Events
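A couple of the DMVs from that outline are worth jotting down; this snippet (my own quick sketch, not Maciej’s demo code) shows how SQLOS exposes its schedulers and NUMA memory nodes:

```sql
-- One scheduler per logical CPU; parent_node_id is the NUMA node it lives on.
SELECT scheduler_id, cpu_id, parent_node_id, status,
       current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- Memory nodes as SQLOS sees them (one per NUMA node).
SELECT memory_node_id, virtual_address_space_reserved_kb
FROM sys.dm_os_memory_nodes;
```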
Over the next few weeks I may blog some more detail of what was covered in the course, but I’ll only do so if I find the time to put into some of my own research and write my own examples – it’s OK to stand on the shoulders of giants, but it’s not OK to copy work from over their shoulder!
Maciej was a great speaker, extremely enthusiastic, and well deserves his reputation as the Dr. House of SQL – out of the many hours of training I’ve received in my career this was definitely one of the most beneficial and ‘action-packed’ – all killer, no filler. If you ever get the chance to check out one of his courses in future I’d recommend it; his next public session will be in Germany (click here for more info).
Until a couple of months ago I’d never heard of the Open Tech conference; it’s organised by the UKUUG (the UK’s Unix & Open Systems User Group), but it was recommended by someone on the London 2600 mailing list and I had the day free, so I thought I’d give it a whirl.
Being an occasional rather than hard-core Unix/Linux user, I was worried that much of the conference would be beyond my technical comfort zone. That can sometimes be a good thing because it forces us to learn, but as it turned out the Open Tech conference focused much more on the Open Systems part of the UKUUG’s mandate, concentrating particularly on Open Data in the government and media spaces. After reading the conference schedule and realising that it was all about data I thought it was definitely worth a look, so I pre-registered (the only cost being a ridiculously cheap £5 on the door).
The attendees were quite mixed but dominated by developers working in the academic, government and media sectors (including journalists), giving the event a different atmosphere to most technical conferences I’ve been to, which are usually centred either around commercial users or the security/hacking community. The result was that there were a lot of interesting people there who’ve really made use of the Open Data available, shared their work, and in some cases made a real difference to government policy as a result. Before the sessions started I ended up talking to a few people, including one of the guys behind the excellent TheyWorkForYou.com, which despite the government’s Open Data initiatives still has to rely on good old-fashioned web scraping.
I attended a couple of talks about the data.gov.uk projects, which included some great examples of how the data can be used, among them talks by the people behind Where Does My Money Go and the ASBOrometer. As a complete outsider to the Open Data world I’ve been extremely impressed with the amount of data that’s been published and the community that’s grown up around it; the best part is that the ‘community’ is made up of developers and civil servants working together to achieve something that I honestly think is quite ground-breaking.
Stepping away slightly from the government-oriented data projects, Manuel Corpas delivered an interesting talk entitled “Who Owns My Genome Data?”, essentially raising the issues posed by companies such as 23andMe, who will provide a personalised report based on your genome (all you provide is a saliva swab) but then retain the rights to use your data and future innovations derived from it. Patrick Bell followed with an introduction to the British Geological Survey’s Open Geoscience site, which seems pretty good on the face of it – they seem committed to open data – but I found it difficult to think of applications for the data that would make sense to me personally.
The last batch of talks I attended related to mobile phones and mobile content, the first being the Wild Ducks smartphone talk by Sebastian Brannstrom of the Symbian Foundation. I hadn’t realised until the talk that Symbian still accounts for over 40% of the global smartphone OS market, even though Android and iOS take centre stage in the public’s (or the media’s) eyes. With the OS now being open source, they’ve launched a project to build a handset based on easily available components that you could hack together yourself, built around the Beagle Board with some extensions. The talk was very interesting and it was great to see a demo unit that Sebastian passed around the audience; you can see more on the Wild Ducks blog. The last talk I saw was by Terence Eden, titled “Why Doesn’t Your Site Work On My Phone?”. Essentially it was a timely warning to developers not to chase the hot platforms of Android and iOS when much of the global mobile market (81%) still relies on ‘feature phones’, which by and large have basic HTML capability but not much more. Some handy resources he pointed towards were WURFL, Device Anywhere and Device Atlas.
All said and done, I’d say the conference was definitely an eye-opener, and if I can make the next one in 2011 I’ll definitely go – but I’ll feel like a failure if I turn up having done nothing with all the knowledge I gained from the 2010 presentations, so I’d better hurry up and get my hands dirty with some of this Open Data…