Thursday, September 25, 2008
Exadata - has it been in development for two years?
What, no rule-driven content management?
Exadata and the Database Machine - the Oracle "kilopod"
Larry Ellison was quoted in a number of reports saying the Oracle Database Machine "is 1,400 times larger than Apple’s largest iPod".
Larry, when you want to get over that something is big - really big that is, industrial scale even - just don't compare it with a (however wonderful) consumer toy. Not even with 1,400 of them. 1.4 kilopods is so not a useful measure.
By the way, can I trademark the word kilopod please? (presumably not - a quick google found a 2005 article using the same word, and there is some kind of science-in-society blog at kilopod.com).
Thursday, September 04, 2008
Back to the future - or is that forward to the past
At the interview, I found myself less than 100m from the site of my first ever "proper" job (the old Scicon offices are now an upmarket West End hotel). So just 28 years and 1 month later, I will be sauntering along Oxford Street once again.
I'll be commuting daily at first, but I will try to stay down during the week at least some of the time if I can find somewhere cheap, clean and convenient. Any old colleagues around central London - sometime between now and Christmas we should meet up ...
Friday, July 25, 2008
Microsoft acquires DATAllegro
- Curt Monash's DBMS2 blog - all his DATAllegro links are here - as there are so many of them. He also links to other comments
- Philip Howard at Bloor and IT-Director suggests that MS is assembling an entire DW stack with Zoomix and perhaps one day Kalido and Ab Initio as additional components
- Mark Madsen from Intelligent Enterprise says what it means for customers, other vendors and BI
- Kevin Closson with his welcome plain-speaking (and Oracle-centric) viewpoint refers back to earlier posts wondering about some specific details of this emperor's clothes
- Seth Grimes thinks it's a mistake that will be slow to deliver (and may hurt existing DATAllegro customers); it should have been Dataupia
- DATAllegro CEO Stuart Frost sounds happy with his new role
Monday, June 30, 2008
ESB consolidation - Progress buys Iona
Saturday, June 21, 2008
First come first severed (sic)
- A large railway company are looking to hire a Oracle Developer to join there expanding company on a 3 month rolling contract.
- The is major role and candidates will be selected on matching skills and on first come first severed basis
Saturday, June 14, 2008
Vote now to open up Metalink!
There is goldmine of useful information in Metalink and having access to it would optimize the efficiency of people using Oracle toolsets, in my opinion enhancing productivity and by inference Oracle adoption globally which would be win win for everyone.
I've expressed the same thoughts myself in the past. So all of you Oracle professionals who would benefit from access to Metalink but are not included in your employer's arrangements (especially for freelancers like me) - go and vote!
Wednesday, June 11, 2008
Another blogger
Friday, June 06, 2008
BEA Aqualogic broken up
Tuesday, May 13, 2008
Performance problems concatenating LOBs
Thursday, May 08, 2008
MS subpoenas Oracle in Juxtacomm ETL patent case
Monday, April 21, 2008
Latrz - for all those web pages you want to read, just not right now

James Strachan blogged about Latrz here. It's a neat little Google app that lets you bookmark and tag pages you want to read later, then come back and read them (and check them off your list) when you've got a spare few minutes. Coming soon, you'll also be able to share the stuff you liked with your friends / work colleagues.
Monday, April 07, 2008
Kalido Business Information Modeler - everyone should have one
The only negative he comes up with is "the relatively limited number of platforms that Kalido's software runs on: one would like to see it on Netezza for example, or Teradata".
Monday, March 31, 2008
IBM FastTrack for Source To Target Mapping for DataStage
I find it interesting that releasing a simple attribute mapping tool is seen as a major breakthrough for the DataStage/Information Server family; Constellar had more or less exactly that 12 years ago (no glossary, though). The UI may not have been quite so business-user friendly, but it certainly supported point and click, plus simple integration with metadata repositories.
Thursday, March 13, 2008
Vitria brings Web 2.0 and high transaction rates to BPM with M3O
- "BPM provides standards-based executable modelling (based on BPMN) on top of business knowledge Repository.
- Web 2.0 provides the rich user experience with zero footprint to enable a collaborative design environment.
- Event processing provides the support for rule and process definition and real-time runtime performance based on event driven architecture.
- Only when you combine these together do you get a fundamentally new user experience with multilayer visualization, collaborative modelling environment, business level abstractions and event management"
Leaping shamelessly onto a passing bandwagon, Vitria explains M3O as "think iPhone meets dashboards" (quoted from ebizQ). The idea is that the "iPhone coolness" of the Web 2.0 interface will remove the gap between business and IT people. Well, as long as it doesn't (like the iPhone) lock users into an expensive long term relationship...
This looks like the first fruits from the return of JoMei Chang as CEO last July and the decision to go private, executed last March.
Wednesday, March 12, 2008
Cape Clear-out
Of course, SaaS needs integration, and Cape Clear CEO Annrai O'Toole promises that they will be providing the necessary Integration-on-Demand - but it seems like one of the leading independent commercial vendors has now been marginalised.
Ronan Bradley, former CEO of PolarLake (also Irish - one of our partners at SpiritSoft, and later a client of mine) also worries where the ESB market is going.
Is this a general problem in the middleware market, or is it just 'cos they is Irish? Are small vendors being caught between the rock of large vendors and the hard place of open source? Or is this (as Steve Craggs suggests in a comment on Ronan's post) simply a result of Cape Clear's own hubris?
Sunday, March 09, 2008
Change of scene
Saturday, March 08, 2008
Quantum dot memory
Saturday, March 01, 2008
H-Store - a new architectural era, or just a toy?
Philip Howard's commentary Merchant relational databases: over-engineered and out-of-date? supports the idea that perhaps general purpose relational databases should now be treated as "legacy". He references a paper The End of an Architectural Era (It's time for a rewrite) by Michael Stonebraker and others from MIT.
My first thought was that RDBMS developers such as Oracle have seen off previous architectural challengers - most notably object oriented databases (OODBMS) - in the past 25-30 years. What makes Stonebraker's H-Store any different?
First, a quick summary of the paper:
- RDBMSs were designed 30 years ago - since then memory and cpu have become faster, cheaper and bigger, changing the balance against magnetic (disc) storage
- increasingly they are failing to meet today's complex challenges
- niche solutions have overtaken "general purpose" RDBMS in many areas (eg data appliances for business intelligence; specialist text search engines; etc)
- and now even OLTP, the "core competence" of the RDBMS, is no longer safe; a new approach (such as H-Store) can easily beat traditional RDBMS by cutting out non-functional architectural features (eg: redo logs stored on disc) and achieving the same goals (ACID transactions) in another way.
The paper claims that H-Store can beat a traditional RDBMS at TPC-C style benchmarks; an early version runs up to 80 times faster than "a very popular RDBMS" which itself underwent several days of tuning by a "professional DBA".
H-Store's secret sauce is that it is (in effect) single threaded. It assumes that all transactions are very fast, and executes each transaction in turn to completion. This gets rid of the need for complex read-consistency models. Other optimisations include keeping undo in memory (because transactions are short and sharp) and discarding it at the end of the transaction.
Well, go off and read the paper for the details, but here's what I think.
On the negative side:
- As a comparative benchmark, this fails through insufficient disclosure. For a paper that passes as academic, there is remarkably little detail on what they actually did.
- Stonebraker assumes that "most" OLTP systems can be represented by a hierarchical model - what he calls a "constrained tree application" (CTA). As a result these applications are relatively easy to partition over a shared-nothing architecture. I wonder whether this is really the case. Parts of your application may be like that, or they may be like that for some periods (during the online day, for example). But even OLTP applications need to manage longer transactions, complex reporting, and updates to the "read only" tables. In his example, he assumes (section 5.0) that the Items table is read only, so it doesn't break his tree and it can easily be replicated. But we know that new items will be added; others will be re-priced, re-categorised, phased out. Can that be handled without interrupting a 24/7 H-Store style application?
- He also seems to assume that there is only one axis of partitioning - in his case, the warehouse. But over time, the main focus of interest changes. An order is taken at a shop; it is ordered from a warehouse; it is delivered to the customer. Different "CTAs" at each stage. How does the H-Store morph its representation through the course of the information lifecycle?
On the positive side, though, this represents a call to action for the traditional vendors.
- Any performance specialist knows that the best way to tune something is to stop doing it. If we really can change the rules of the game, we can avoid all that expensive "insurance". We're used to making calculated design tradeoffs for performance; this could be just another one of those.
- I suspect that the RDBMS vendors will (more or less rapidly) steal any really good ideas. Stonebraker states that "no [RDBMS] system has had a complete redesign since its inception". But, like the proverbial axe, RDBMS internals have been refreshed, a piece at a time. Oracle's database kernel has had at least two major rehashes in its lifetime. It's well within Oracle's or IBM's capability to incorporate the more realistic ideas from this paper, and to find a way to blend them with the current state of the art.
- They can also learn from the approach MySQL has taken of supporting multiple storage engines - horses for courses. Oracle already merges XML, relational and OLAP data stores; building in a high-performance OLTP kernel to address specific classes of OLTP application is not at all inconceivable. Although it will lead to all sorts of information lifecycle difficulties, we are already used to migrating data from OLTP to OLAP; with good tool support it should be possible to work round the constraints that allow H-Store to dispense with so much that we normally take for granted.
I may revisit this paper to tease out other issues - for example Stonebraker rants against SQL (perhaps he's never got over Ingres being forced by the market to provide SQL rather than the more academically respectable Quel). H-Store uses C++, and may move to Ruby; the implication is that applications will be object-oriented, navigating row by row (like so many J2EE apps, and suspiciously like COBOL/Codasyl) rather than being set-oriented.
Let's watch this space.
Tuesday, February 26, 2008
SYS_CONTEXT versus V$ views for getting session information
I don't approve of granting access to V$ views willy nilly; best practice is always to grant the minimum privileges necessary to achieve an objective.
Another poster raised the issue of performance. In the past, SYS_CONTEXT was considered slower than direct access to the views.
So here is a test to compare the two:
set echo off feedback off
set timing on
set termout off
variable v_loops number;
exec :v_loops := 1000000;
set termout on
prompt Testing sys_context
declare
  l_user      varchar2(30);
  l_action    varchar2(32);
  l_module    varchar2(48);
  l_sid       number;
  l_loopcount pls_integer := :v_loops;
begin
  for i in 1..l_loopcount loop
    dbms_application_info.read_module(l_module, l_action);
    l_user := sys_context('userenv', 'session_user');
    l_sid  := sys_context('userenv', 'sessionid');
  end loop;
end;
/
prompt Testing mystat
declare
  l_user      varchar2(30);
  l_action    varchar2(32);
  l_module    varchar2(48);
  l_sid       number;
  l_loopcount pls_integer := :v_loops;
begin
  -- note this only gets one of the three pieces of information
  for i in 1..l_loopcount loop
    select sid
      into l_sid
      from v$mystat
     where rownum = 1;
  end loop;
end;
/
And here are the results:
C:\sql>sqlplus testuser/testuser
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 26 22:23:10 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> @sys_context_test
Testing sys_context
Elapsed: 00:00:07.31
Testing mystat
Elapsed: 00:00:38.57
SQL>
I suspect that in the past SYS_CONTEXT issued recursive SQL under the covers (just as the SYSDATE PL/SQL function used to, and as the USER function and the 11g assignment from a sequence still do).
Now I assume SYS_CONTEXT gets its information directly.