The fundamental underpinning of the math and costs under mandated universal care NOT paid for by the government is that people who have continuously paid for insurance are now pooled with folks who previously rolled the dice on the costs of chronic ailments. There is no reward for having paid to avoid that risk over a long period, and we are now burdened directly with the costs of those who lost the bet.

Now, historically the culture in this country is that buying insurance is an individual risk choice. Don't buy fire insurance? If your house does not burn down, big win. Most people choose not to take that risk, and if you have a mortgage the bank makes the safe choice for you, because their experience is based on a pretty good proxy for the entire risk pool. It would be entirely possible to craft a system that rewarded people for their past contributions to the shared risk, but that was not done.

In the name of helping the poor, the ACA has taken advantage of and done damage to those who have previously paid for insurance. We would be far better off to deal directly with ending poverty for those legally resident in the US with something like the Friedman-Moynihan-Nixon plan and to allow a choice of whether to buy insurance. Or, in addition to an F-M-N negative income tax plan, we could just pay for "A and E" (accident and emergency) care from the federal level, with a credit over time to the folks who previously paid for insurance, and let insurance for chronic problems remain a choice. Of course the most expensive choice is universal coverage of both "A and E" and chronic problems. There is zero chance we can meet everyone's every need until we routinely have "Star Trek" era technology, so don't hold your breath, and plan for rationed care if you want that option. Just like phasing out the Ponzi scheme of Social Security, we should pay off our obligations to those with sunk cost in the prior system when we completely change the rules of the "game."

This just in from the feds: the reason exchange coverage in Vermont costs so much is lack of competition. Sigh. No kidding.
Another day at Oracle Open World: an Oaktable World "Ted" talk by yours truly about why you should join IOUG in particular and relevant user groups in general, a great hacking session by Tanel, using judgment and skill in describing performance by Cary Millsap, the best session ever by Jonathan Lewis and Maria Colgan (self-appraisal by JL, with which I agree) back at the official conference, and then a world-class reception with the folks from Delphix. A bit exhausted, which I hate to admit, but I obsess over presentations, I've given 3 in 3 days, and I'm glad today's was only 18 minutes!
Now that's the summary I put up on Facebook.
The day actually started with a conference with IOUG super staff person Alexis Bauer Kolak, DBTA contacts, brand-new IOUG executive director Josh Berman, and a follow-up with Scott McNeil and Steve Lemme of Oracle. It seems like everyone wants to cooperate to make IOUG an essential source of truth and goodness.
Then I had a wonderful chat with George Buzsaki and Kevin Hudson at their Oracle demo pod about their choice to duplicate the file structure for the middle tier for editions-based continuous-run patching. They were a bit surprised to hear that I endorsed this choice, as they have gotten a fair amount of negative feedback. True, you *could* concoct something like a symbolic link structure to avoid duplication of files that have not changed. But that would be a lot more moving parts and a lot more things that could go wrong. Disk IS cheap, and one duplication is well worth the cost to make the process more reliable. The editions "magic" in the database that makes this possible is complicated enough, but that just cannot be simpler and still work properly.
I followed the legendary Mogens Langballe Norgaard's "Ted" talk at Oaktable World (which was fabulous; his message, in re license audits: we do not use scripts that you, Oracle, will not certify are harmless to our system). I started my talk by saying you should always make sure you do not follow Mogens on any agenda, anywhere, because you're bound to be a letdown. I let my passion about user groups show a bit and that seemed to be well-received. User groups are a requirement to make Oracle better, to make the users themselves better able to leverage Oracle technology, and to aggregate a strong, useful message from the users to Oracle that can be heard on essential issues. It helped that I also proclaimed at the outset that this was a technical-content-free presentation that would be as short as I could make it while still delivering the message. My friend Kyle Hailey gave me one of the best compliments I've ever gotten: "That was the first presentation about joining user groups I've ever heard that was interesting." (Thank you, Kyle!)
Oaktable World continued after lunch with outstanding presentations from Tanel Poder, Kerry Osborne, and Cary Millsap.
After an interlude at the beginning of the Delphix reception (Thank you, Kyle), we proceeded to the "Optimizer Boot Camp" that was billed as Colgan vs Lewis, Oracle vs independent, Irish vs Welsh, and woman vs man: each gave 5 tips in alternation, Maria Colgan first and Jonathan Lewis in rejoinder. That is worth an entire blog post of its own. I'll wait a bit to see whether someone recounts it adequately. For now I'll just say it was brilliant from start to finish.
Then back to the Delphix reception. That also is a story for another day!
I'm pretty much a green-sneaker, tree-hugging conservationist. (The Nature Conservancy, Audubon, and Arbor Day get annual renewals like clockwork, and I helped write and implement Scenic Road and Wetlands Preservation legislation here in Lebanon, NH in the late 1980s.) So I'm really disappointed when loss-of-species and habitat headlines and statistics are so oriented to shock value that my reaction is "Is there a seed of truth in this obvious attempt to mislead?" instead of concern for the subject matter.

Today's entry for my #please_read_tufte hall of shame: "…facing 50 percent drops in their numbers within seven years if the current rate of decline continues…" I'll save you the math: that's about 9.43% per year. Stated that way it is bad enough, and it avoids stirring up all manner of thoughts about "you're lying to me somehow." It would be even better if they added some information about whether last year's loss rate was an outlier or whether we should expect that to be about the rate for the upcoming years unless we do something. (Plenty of populations in the wild have cycles much steeper than that.) But no, all they wanted to do was publish 50% and damn the context. For someone who spends a lot of time trying to be clear and concise about the meaning of data and statistics this is really annoying – even if the underlying truth supports the claim, they sound like a vaporware sales team.
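(If you want to check that figure yourself, here is the arithmetic as a quick sketch in the calculator I usually have open, Oracle SQL: a 50% drop over seven years implies an annual rate r with (1-r)^7 = 0.5.)

-- purely a back-of-the-envelope check of the "about 9.43% per year" figure
select round((1 - power(0.5, 1/7)) * 100, 2) as pct_per_year from dual;
-- returns 9.43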
This is related to my friend Cary's blog entry (which I consider a classic).
So when you post numbers and commentary about numbers, tell me something useful and succinct: give me meaning in context, not the mathematical analog of making an ethical point by proof-texting a fragment of the Bible.
… and if you find yourself writing that you improved performance by more than 100%, make sure you're clear that you're talking about the throughput of some transaction and not response time, or be prepared to show me your time machine, 'cause without a time machine the asymptotic ceiling on response time reduction is 100%.
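To make the distinction concrete with made-up numbers: suppose a task that took 10 seconds now takes 2.

-- purely illustrative numbers: elapsed time drops from 10 seconds to 2 seconds
select (10 - 2) / 10 * 100             as response_time_reduction_pct, -- 80, and it can never exceed 100
       ((1/2) - (1/10)) / (1/10) * 100 as throughput_improvement_pct   -- 400, with no upper bound
from dual;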
Whether urban planning or Information Technology systems, an outside pair of eyes might alert you to something you have gradually become blind to as it accrued. Consider these pictures: some landscape architect planned ahead for the growth of these trees. Those curbed and slotted sections spread the load to prevent soil compaction, prevent wheeled machinery from coming too close to the tree trunk, and are easy to remove and replace. Unfortunately, some time after the design and implementation, the planned replacement as the tree grew was forgotten. An at least annual review of your system implementation documents, or just a look by someone from outside your shop, might keep you from going from the first picture to the second. We (Rightsizing, Inc.) do this sort of thing for Oracle Technology and Business Processes.
With some friends from the Netherlands and Estonia at the Hard Eight BBQ at 688 Freeport Parkway in Coppell, Texas: (+1) for the food and another (+1) for the Texas experience. What a fun way to decompress after a brain-stuffing week at Hotsos!
An easy confusion of logic was succinctly cleared up by Toon Koppelaars on oracle-l today. It *may* be helpful to read that thread before you read this post, but I hope it stands alone.
The original poster wanted to exclude only month number 5 from year 2012, but was perplexed that
"where (month != 5 and year != 2012)"
also excluded the 5th month of 2011 and all of the data from 2012.
Toon referenced De Morgan's Laws and explained that "not X and not Y" is equivalent to "not (X or Y)", which would be "not (month=5 or year=2012)" in your first query.
For those who are truth-table challenged, it may be helpful to consider this (regardless of the actual plan Oracle might choose) as a filter operation: you get a row, and if it is either that nasty 5th month you apparently did not want or that nasty year 2012, throw it away.
What the poster wanted was only to omit the 5th month of 2012. That is “not (month=5 and year=2012)” as Toon so succinctly wrote.
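If you would rather see it run than reason through the truth table, here is a tiny self-contained demonstration; the four rows are made up purely for illustration, and only the month and year columns match the poster's table.

with t as (
  select 4 as month, 2011 as year from dual union all
  select 5 as month, 2011 as year from dual union all
  select 4 as month, 2012 as year from dual union all
  select 5 as month, 2012 as year from dual
)
select month, year,
       case when month != 5 and year != 2012     then 'kept' else 'discarded' end as original_predicate,
       case when not (month = 5 and year = 2012) then 'kept' else 'discarded' end as intended_predicate
from t;
-- the original predicate discards 2011-05 and everything from 2012;
-- the intended predicate discards only 2012-05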
Now I found it interesting that the 5th month of 2012 was the last month for which there were rows in the table at all. As an information processing issue, this looks to me as if you want all the data except the last month (which is perhaps still in process or as yet not audited). If that is indeed accurate, then
where year < 2012 or month < 5
would fulfill the logical requirement, and this would helpfully lead to re-usable code
where year < &incomplete_year or month < &incomplete_month
which is a little more clear (if in fact my surmise of the purpose is correct, which we'll consider true for the rest of this posting). But this still requires logic on two columns when in fact what is probably wanted is simply the data before 2012-05 (yyyy-mm format).
Now this can be rendered as
where to_date(to_char(year,'FM0000')||to_char(month,'FM00'),'yyyymm') < to_date('&incomplete_month','yyyymm')
which makes it clear that this is really a range query, if you can discern that fact through all the formatting used to shove the data cleanly through the to_char and to_date functions.
Now this points out one of the benefits of using real date columns in the first place, so if that is an operationally practical solution you can fix the data model. If it is not operationally practical to retool the data model, Oracle has provided us with a virtual column capability. You won't have to see the sausage getting made in your routine reading of queries, and you can put a helpful comment on the column in case someone might want to read in text what your intention was for the meaning of the virtual column.
In this case,
alter table sometable add year_month as (to_date(to_char(year,'FM0000')||to_char(month,'FM00'),'yyyymm'));
comment on column sometable.year_month is 'year and month columns combined to produce a valid date';
does the trick, and the where clause becomes really simple:
where year_month < to_date('&incomplete_month','yyyymm')
This makes the meaning of the query trivial to understand. As the volume of data grows, and perhaps older data is no longer of interest, predicates of the form
where year_month > to_date('&older_than_relevant','yyyymm')
and year_month < to_date('&incomplete_month','yyyymm')
or, if you prefer between syntax,
where year_month between to_date('&older_than_relevant','yyyymm') and to_date('&incomplete_month','yyyymm')
neatly do the trick. (Keep in mind that between is inclusive at both ends, so it is not exactly equivalent to the strict > and < pair above; pick whichever matches the boundaries you intend.)
Since you can index a virtual column and partition on it as well, this then becomes a real opportunity for reducing the amount of work the computer will need to do to answer your query. With an index and no partitioning, there is a good chance it will be cheaper to access the data you need via the index; that will depend on the details. With partitioning you may get partition pruning, and the nature of the date partitioning likely means you can set it up to scan whole partitions, which should dovetail nicely with parallel query and Exadata optimizations. (That bit about Exadata is speculation. Oracle still hasn't sent me one to play with.)
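For what it's worth, here is a rough sketch of both options. The index statement works directly against the virtual column added above; the partitioned table is invented for illustration (its name and the payload column are made up), since virtual-column partitioning has to be declared when the table is created.

-- indexing the virtual column is just an ordinary create index
create index sometable_year_month_ix on sometable (year_month);

-- a retooled table partitioned on the virtual column (11g and later)
create table sometable_part (
  year       number(4),
  month      number(2),
  payload    varchar2(100),
  year_month as (to_date(to_char(year,'FM0000')||to_char(month,'FM00'),'yyyymm'))
)
partition by range (year_month) (
  partition p_2011 values less than (to_date('201201','yyyymm')),
  partition p_2012 values less than (to_date('201301','yyyymm')),
  partition p_max  values less than (maxvalue)
);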
So I've gone well beyond the subject of the oracle-l question, which was already answered. That's what a few friends keep telling me I should use a blog for, and it is slowly sinking in.