Really? This is what they're using at MIT? We had some Scheme programming problems for Languages, so I downloaded the Scheme interpreter from MIT's GNU site and was instantly teleported back to cranking out C code on a DECWriter in 1980. Even by 1990, this programming environment would have been considered pretty lame next to the Turbo editors Borland was churning out. By the standards of this century, it's a joke.
I realize that many good programmers prefer command-line-style interfaces. When I'm programming a lot (which doesn't happen much anymore), I can get pretty fast with them myself. However, there's still no excuse for not having good diagnostics at the ready when things go wrong. ("The object 0 is not usable in this context" is NOT a good error message). I was reminded of the original Unix C compiler which had a single error message: "Syntax Error". At least it gave you a line number.
Furthermore, we're talking undergrads, here. Yes, very bright undergrads, but students all the same. They aren't going to know the environment forwards and backwards. They're learning. An environment that gives no assistance may be the trial by fire that a place like MIT is looking for, but that's just another reason I don't recommend doing undergrad work at a blue-chip school. Save that sort of abuse for grad school when you're sufficiently hardened for it.
Well, I'm in grad school now, so I got it to work and got my programs written.
Monday, November 30, 2015
Saturday, November 28, 2015
Aslinger 12 2014
Having gained a new age group with my fiftieth birthday last year, I decided this was the year I should take a shot at the Eastern Missouri Ultra Series (EMUS) title. The six-race series is scored by total mileage covered with bonus points for age group and overall placings. The big-points race in the series is the opener, the Howard Aslinger Foundation 12/24-Hour in Cape Girardeau. To really rack up the points, one should obviously do the 24, but I didn't want to fry my legs for the two following spring races, so I entered the 12.
Prudence dictates keeping the pre-race run to something light, but I can't drive right past Hawn on a warm spring afternoon without stopping to run an old orienteering course in the woods. I walk the uphills and don't take any chances jumping over deadfall or boulders. Whatever damage is done to the legs is more than compensated by the benefits to the soul. An hour further south, I stop in Jackson to make sure Pizza Inn loses money on their buffet, check in at my hotel, and then head into Cape to pick up my number and watch the 24-hour start.
As one would expect, the SLUGs are well represented. I spend an hour catching up with Laura Range, John Poulihan, David Stores, Jan Ryerse, and Travis Reddin, who are all entered in the 24. The ever-present Jim Donahue is also on hand helping set things up. With the start approaching, I assume Race Director Bryan Kelpe has better things to do than chat with me, so I don't bug him.
Bryan devotes a good portion of his pre-race briefing to the motivation for the run: the Aslinger Foundation which assists those with disabilities in their educational goals. As I was to find during the weekend, these were no empty words. The run not only raises money ($16,000 this year), but actively involves their clientele, offering a competitive wheelchair division and also providing volunteers who push or otherwise escort non-competitive disabled entrants around the course.
24-Hour start
Despite staying only a few minutes from the start, I get up before 5AM so I have time to eat a bit and let it settle. The few minutes becomes a few more as I have to scrape ice off my windshield. The temperature is still right around freezing as we line up for the start. Five days before the equinox, this race will be almost exactly sunup to sundown. Bryan sends us off just as the first beams break the horizon.
A group of six forms at the front. The pace is just slightly faster than I'd like; the first lap is done in 8:35. When lap two comes up in 8:20, I decide that hanging with the lead is not a priority at this point. It's time for my first walk break, anyway.
One of my projects over the winter was to read Tim Noakes' tome, The Lore of Running. It's about as thick a book as a dyslexic such as me ever dares to pick up. I'm not done with the whole thing, but I did get through the theoretical section which looks at the various explanations of what really limits endurance. My approach in previous races of this length has been to try to run very close to a constant pace throughout, but I haven't had much success with that. Seems that no matter how easy I take it out, I fall off the pace at around 45 miles. The research cited by Noakes suggests that this is caused primarily by the repetitive eccentric contractions (the landing rather than the pushoff). Since you take more steps to cover the same distance at a slower pace, running slow can actually hasten rather than delay the fatigue. The suggestion is to run faster and take frequent walk breaks to allow the muscles a chance to recover. I decided to test the strategy in this race by walking for 60-90 seconds every other lap, using that time to eat or drink.
The temperature has hardly moved and a dense fog has rolled in. In some parts of the course you can barely see 50m ahead. While a bit eerie, it makes for pretty ideal running conditions and, even with the walk breaks, I get to ten miles in 90 minutes feeling fine. The fog begins to disperse around 9AM and I begin to shed layers, dropping my hat, gloves, wind shirt, and long pants next to my cooler on successive laps.
Never one to miss sneaking in some miles, Bryan is on the course and we run together for a few laps. Actually, it's rather amazing how many miles the race director has managed to sneak in without the event going to pieces. This is the fifth running of the race; he's logged over 400 miles across them; and I've never heard a competitor complain that something was being neglected.
I hit marathon distance at 10:56AM (still right on 9:00/mi). The temps are still very comfortable, but there is no shade anywhere on the course so I don't expect that to last. The early leaders have taken some breaks and I come up on the second place runner, now a lap behind me. He asks how far I'm going and I point to the sun and say it will all depend on how hot it gets. In September, a day like this would result in PR's, but it's been a very cold winter and none of us are prepared to run in even modest heat.
Pretty nice temps in the shade! Too bad there was no shade.
US 24-hour champion John Cash has come down from Washington, MO to pace David Stores and any other SLUGs that might want company. He runs with me for a few laps and asks how I'm going. I tell him I'm fine now, but am sure I'm going to have to make a pace adjustment due to the heat. It's nice to have someone of his ability to talk to. We discuss several options and I settle on staying close to 9-minute pace to halfway, which will put me at just under 40 miles, and then extending the walk breaks in hopes of getting more food and fluid down (and keeping them down).

The adjustment turns out to be a bit more severe than I had hoped. While I manage to knock out six more miles in the next hour, at 2PM I succumb to nausea. I walk a full lap, trying to settle my insides by sipping on a soda and eating some strawberries. I'm able to get running again and finish the second marathon (52.4 miles) at 3:10PM. It's 18 minutes slower than the first and with temps now pushing 70, I'm sure the worst is yet to come. Any hope of getting the triple done within the 12 hours (a stretch goal in the best conditions) is long gone.
The next two hours are spent feeling sorry for myself while grinding out depressingly slow miles. Misery loves company, and I have plenty of that. Pretty much the entire field has been reduced to alternating between a walk and weak shuffle. I chat with a few other SLUGs such as Susan Kenyon (who gets her goal of 50 miles in the 12-hour) and Jessica Knopf (who is in a remarkable 5-way battle for the Women's 12-hour title; they end up steamrolling all but two of the men, finishing 3, 4, 5, 6, and 8 overall). John Poulihan is also still going, though he's given up on his 24-hour race and is instead pacing his wife, Lynn, in the 12.
One very salty hat by race's end.
It gets me fairly far. I'm back on 10-minute pace almost immediately and I hit 64.5 miles at 5:45PM. I walk half a lap with Laura, who is in the process of gutting out the Master's win in the Women's 24 on a day when her body clearly had lesser aspirations. I run one more lap because 66 miles sounds a lot better to me than 65 and call it a day at 6:10PM. (OK, officially it's 67 laps for 65.928 miles, but I also walked 100m to my car to get my change of clothes before the course closed, so I'm calling it 66).
All's well that ends well. Three SLUG champs: Laura, Eric, Susan. Jan Ryerse also won his age group, but we missed him in the picture.
The part of me that is never satisfied wishes I could have kept things together well enough to take a real shot at the course record of 71.6 miles, but there was no way I was doing that today. Still, my total was the third highest in the five runnings of the race, so I regard it as a solid no-excuses effort. And, after last year's frustrating string of second-place finishes, I'll take overall wins any way they come. It certainly was an ideal start to a series I've wanted to run for quite some time.
Thursday, November 26, 2015
Blame it on John
So, I'm catching up on my reading in Sebesta for Languages and he's pretty quick to lay the whole rejection of functional programming at the feet of the von Neumann architecture. I won't reprise my earlier rant on that one, except to restate that the trends in programming over the last 20 years are a pretty clear refutation of the assertion that machine efficiency is the limiting factor in language design.
So, if it's not because the hardware can't support it (it can; I've written plenty of production code in F#; it works just fine on a standard machine), what is it? Even my comments about side effects miss the mark because ALL production-ready functional languages permit side effects; they just make them a bit less convenient.
At least some of it is at the core of our own use of language as humans. Programmers are people, and when people want something done, we tend to start barking out orders. Stating an objective in terms of requirements rather than actions is just as difficult (maybe more so) for programmers as it is for anybody else. And, it's not just a failing on the part of the speaker. I recently said "Why is the garbage full?" and got an earful for being passive aggressive. From now on, I'll say "Take out the garbage!" I might even throw "dammit" in there somewhere which is pretty much the antithesis of declarative. (BTW, there is no sexism going on here. Kate and I each do plenty of housework; the garbage just happens to be on her list).
Thinking of computation as declarative really only makes sense to mathematicians. And, while I'm happy to be counted among their lot, I also realize that we comprise a tiny fraction of the human race. Stating n! as {1 if n = 1; n (n-1)! otherwise} may seem elegant to a few, but most would much rather I just told them to multiply all the numbers from 1 to n.
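To make that concrete, here's a minimal sketch (Python, my choice purely for illustration; nothing from the course) of the same factorial stated both ways:

# Declarative: state what n! *is* (for n >= 1), mirroring the recurrence above.
def factorial_declarative(n):
    return 1 if n == 1 else n * factorial_declarative(n - 1)

# Imperative: bark out the orders -- multiply all the numbers from 1 to n.
def factorial_imperative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial_declarative(6) == factorial_imperative(6) == 720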
Monday, November 23, 2015
APX Complete
This NP-complete stuff is interesting. While I got some of it 30 years ago in undergrad, this is the first time I've looked at it from a truly abstract view. I can see why the theorists like it so much. That said, I learned long ago that I am a better engineer than philosopher, so I'll try not to go too deep down the rabbit hole.
Continuing the thought of a general-purpose NP-Complete solver. What we're really looking at is the class of APX problems: the set of NP optimization problems that can be approximated to within some constant factor in polynomial time. Note that the approximation bound is not arbitrary. The narrower class is PTAS (Polynomial Time Approximation Scheme), where you can get as close to optimal as you want as long as you're OK with not being spot on. Assuming the generally held (but unproven) conjecture that P is a proper subset of NP is true, PTAS is a proper subset of APX.
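For a concrete sense of what membership in APX looks like, here's a sketch (my own example, not anything from the coursework) of the textbook 2-approximation for minimum vertex cover: polynomial time, never worse than twice optimal.

def vertex_cover_2approx(edges):
    # Greedily take both endpoints of any uncovered edge. The chosen edges
    # form a matching, and any optimal cover must include at least one
    # endpoint of each matched edge, so the result is at most 2 * OPT.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# On a 4-cycle the heuristic returns all 4 vertices; the optimum is 2.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))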
I'm more interested in APX. I don't mind the approximation bound being fixed some distance from perfect. However, I'd loosen the definition just a bit by relaxing the constraint of guaranteed performance. Instead, I'd like to say that for an arbitrary bound and alpha-risk, the probability that the algorithm produces a result within the bound of the optimal solution is at least 1 minus the alpha-risk (equivalently, the probability of deviating by more than the bound is at most alpha).
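One way to write that relaxed guarantee down (my own notation, written here with a relative bound, though an absolute bound works the same way): for an instance x, algorithm output A(x), bound epsilon, and alpha-risk alpha,

\Pr\left[\, \lvert A(x) - \mathrm{OPT}(x) \rvert \le \epsilon \cdot \mathrm{OPT}(x) \,\right] \ge 1 - \alpha

The classical APX definition is the special case alpha = 0.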
Before even getting to the assertion, some technical details need to be worked out. First and foremost, are any of the problems I'm considering really NP-Complete? They may just be polynomial time with unacceptably large exponents. Finding a reduction from a known NP-Complete problem may not be a trivial task (and may not even exist). I'm pretty sure I can find something to work with; but it may not be the set of problems I started with.
Next is the problem of guaranteeing the alpha-risk, which is, frankly, something more suited to frequentist methods. Finding a neutral prior, or showing that a biased prior still offers proper coverage, could also get mired in unpleasant complications.
Finally there's the issue at hand: can the risk be guaranteed in polynomial time? I actually think this will be the most straightforward part of the argument. Most algorithms that aren't insane are polynomial time and proving as much isn't that difficult. I either find a method that works or I don't.
This seems like solid dissertation stuff to me. Not sure if my advisor will agree, but it's an idea I'm going to hold onto for at least a while. Plenty of time to adjust things.
Sunday, November 22, 2015
Plan for the break
As an undergrad at RIT, I didn't have to deal with breaks in a semester. RIT uses a quarter system and they run a straight 10 weeks except for winter quarter, which has a 2-week break around Christmas/New Year's. I never figured out how to use that effectively so, after my second year, I worked fall/winter quarters and went to school spring/summer. This also had the effect of maximizing my income as Taylor Instrument paid for a nice holiday break.
At Cornell, there was a spring break but, as a full time grad student, I was pretty oblivious of it.
UMSL gets a fall break during Thanksgiving week. Then, we'll have the final two weeks before exams. I'd like to make the most of this, so I put together a "training plan" for the break.
Monday: Algorithms reading and a start on the final homework.
Tuesday: Finish reading for both Algorithms and Languages.
Wednesday: AM: work on Algorithms homework. PM: Concentration drills using exercises in Algorithms text.
Thursday: Finish first draft of Algorithms homework.
Friday: AM: Concentration drills using exercises in Algorithms text. PM: Languages exercises from text.
Saturday: Off.
Sunday: AM: Check Algorithms homework. PM: Languages exercises.
It will be interesting to see how close I can stay to this. I'm really good at following physical training plans; I have less practice being disciplined about academics.
Saturday, November 21, 2015
Possum Trot 2011
The Possum Trot is being resurrected this weekend. The "final" installment was in 2011. I was one of four people to attend each of the 15 runnings. I finished that string leading the "lifetime" Trot standings, having finished in the top five pretty much every year with two wins. At the time, I was happy to see it end. Not because it was a bad race, but I felt it was time to move on. I'd probably go back this weekend if my toe wasn't broken. But, it is, so I will miss it and Michael Eglinski will pass me in the all-time standings. I'm fine with that. Here's my report from Possum Trot XV.
December 11, 2011
It was the best of times, it was the worst of times...
Yes, quoting Dickens during Advent is a bit cliche, but how better to sum up the previous fourteen editions of the Possum Trot? Rather than attempt an abstract, I'll simply refer interested readers to the race report archives if they want to delve into my extremely varied fortunes in this race. Suffice it to say that there is hardly an emotion I have not experienced at some point while running a Trot.
This year's fortunes aren't looking particularly bright for a number of reasons. For starters, I'm out of shape. Not terribly so, but enough to matter. Normally the Trot is one of my last races of the year and my fitness is very near peak. Because I took my break in October this year recovering from UROC, it will be a very early season race for the Spring 2012 cycle and all I have for training is a few weeks of base mileage. I haven't spent much time on a map, either, though I'm familiar enough with Kansas City's terrain that I'm not terribly worried about navigation.
More importantly, a scheduling change puts the Trot the day after Pere Marquette. This has happened once before and the result was a pretty unpleasant race. Goat events just aren't much fun if you don't have the legs to run with the folks you normally match. And, Pere Marquette will certainly soften your legs a bit.
However, as this is the final edition of the Trot (at least under Race Director Dick Neuberger's leadership) and I've been to all of them, I begrudgingly decide I'll toe the line for the finale and hope for the best. On the upside, even a mediocre performance should close the books with me in the lead on the All-Time Trot Standings.
As is our custom, Kate, Yaya, and I all head out Friday evening to Pere Marquette to spend the night at the lodge. Staying there makes race morning much less stressful and it's a really fun place to stay. Yaya still loves the giant chess pieces and Kate and I enjoy some wine and cheese sitting by the fireplace in the great hall.
The next morning I go for a sunrise run up the bluff, getting to the top just as the sun breaks the horizon. The conditions are absolutely stellar, which is actually not good for me, but I'm not upset. Mornings like this are fair compensation for getting beat by a couple extra road runners who might otherwise suffer in the ice or mud.
After a light breakfast with the family, I change into my racing gear, opting for orienteering shoes over cross country spikes because I'm still nursing some injuries on my left foot and want the extra protection. Because the entire course is trail (albeit a fairly wide one), the 650 runners are started in waves of 25, seeded by previous results (or 10K time for rookies); the waves go off at 30-second intervals. My seed is 12, so I'm in the first group off. At 9:30, we get the horn.
As usual, I go straight to the back of the group on the first climb. I don't know what everybody else does differently for a warmup, but I can never seem to match the initial effort in this race. I don't stress over it and am pleased to get to the first split (Goat Cliff) in 6:01, a single second off my target. We continue uphill to the first water stop which comes at 12:20, well ahead of pace for breaking an hour. The footing is very good; it appears that there has been some trail grooming done over the summer. While that's great for time, it means that I'm not catching people on the descents like I usually do. By the road crossing at roughly halfway, it's clear I'm going to be well under an hour (maybe even a PR) but further back in the field than usual. I decide to keep the pace firm but controlled the rest of the way and settle for breaking the hour and keeping my position in the mid-teens, which will get me into wave 0 again next year. I finish 15th in 58:17. I fail to win my age group for the first time since 2005, finishing second to Rick Barnes. I would have liked to make it a bit closer, but I don't think even my best would have got him today. Meanwhile, I'm happy that I didn't completely torch my legs for tomorrow.
Objectively, the performance is hard to assess. With a median time of 1:23:50, the conditions were certainly fast, especially when compared to the 2010 mudfest (which had the slowest median ever by 10 minutes; despite missing the hour, I consider my 60:47 last year to be my best run at PM). However, it was only a minute faster than the medians from 2007-2009, which also saw a frozen trail and little snow. And, while three age group records fell, the other 20 or so didn't. My overall place was only four spots off from the last couple years. I decide I'm happy with it and move on to thinking about the Trot.
Kate heads back home while Yaya and I join Emily Korsch (who also took second in her age group) for the trip to Kansas City. We arrive just a few minutes late for the PTOC Christmas Party at Minski's Pizza. After the party, we head to the Neubergers' for a good night's sleep. The next morning, I go for another pre-dawn run, noting that my legs are a bit stiff but feel a lot better than the last time I pulled the PM/PT double, and return to the house for a hearty breakfast provided by Dick's wife, Nancy.
As we line up for the start, it is readily apparent that this is the largest field ever for a Trot. No shortage of quality, either. The clear favorite is Mark Everett (Arizona), who has won four times previously. I'm the only other former winner present, but Justin Bakken (MN), Tom Carr (TX), Tom Puzak (MN), and Michael Eglinski (KS) have all seen the podium. On the women's side, Emily will be surely challenged by Molly Moilanen and Erin Binder (both MN) and, while Sharon Crawford (CO) isn't likely to keep pace on the open terrain of Shawnee Mission Park (she is in her 60's, after all), she has more wins than anybody, so she can't be ruled out.
At the word "go" we flip our maps over and are immediately confronted with a skip decision. There are two skips allowed. Skipping 1 saves a LOT of distance (albeit, fast running) and my legs could use the break. However, I hate skipping early because it's easier to chase than to stay off the front. I confirm that none of the favorites are taking the skip and join the stampede towards 1. The leg is long enough that by the time we get there, things are already stringing out. That and the use of electronic punching (first time for the Trot) means the usual jam up at the first control is avoided. I try to stay with the lead group heading to 2, but the legs simply aren't there. By the time we enter the woods, I'm losing contact with the pack and by 3, I'm pretty much on my own.
That's always a good time to slow down a bit and get your navigation settled, but instead I boom 4, losing around 90 seconds. By the time I get to the control, another pack has come up and it's starting to look like I might get mired mid-pack. Not really the way I want to wrap up my Trot career. Fortunately, I have enough experience to not panic in situations like this. Run your own race, keep errors to a minimum, and let the length of the race work for you. If they're still ahead of you at the finish, then they deserve to be.
The next few controls go well. I take the high route through the field to 8, which I'm dismayed to find contradicts the claim in the course notes of no tall grass. It doesn't slow me down much, though. In fact, I have the second fastest split (though that must be tempered by the realization that some of the leaders skipped 7 so they didn't have a split for that leg). I lose some time leaving 10 when I can't find a good crossing of the stream.
The next six legs are running through fields. I'm clean here, but so is everybody else. I can see others ahead and behind; I don't appear to be gaining or losing much. I have no idea of where I am (given that some have skipped and some haven't, it's not particularly useful information, anyway), but I'm still feeling that I should be moving faster. Minor bobbles at 18 and 19 cost me a couple more places. Turns out I was the 14th competitor to punch 19. I don't realize it's quite that bad, but I know I'm looking at a dismal result if things don't change soon. Fortunately, things are about to change.
None of Shawnee Mission Park could be described as "technical", but at least the course is predominantly in the woods for the next ten controls. Open woods on mid-sized ridges at that; pretty much my best terrain. I'm not aware of making a conscious adjustment to pace, but pretty soon, I'm clear of the folks that had been near me in the fields. I skip 23 (obvious) and 27 (OK, but 21 would have been better). Heading to 28, I see Andrei Karpov coming back. He's usually about my speed, so that's encouraging. Unfortunately, I boom 28 (which is pretty inexcusable) when I misread which contour it's on and end up on the hillside below the control. I get back to the control just as Tom Puzak is leaving and Tom Carr is coming in. It appears I'm back in the hunt with just 3K to go!
Puzak and I both miss 30 to the right, which gives Carr a chance to catch up. The three of us punch within seconds of each other. The last 2K is back in the fields and I want to get some idea of how the other two are running, so I push a bit over the ridge to 31. Neither Tom attempts to match the surge, but they aren't dropped either. The descent to 31 is steep and loose. I take it easy since there isn't much point in risking a nasty fall to defend a lead of just a few seconds. Approaching the control, Carr blows by me at a rate that indicates he's spotted the flag. I latch on and pick it up myself moments later. Carr keeps the pressure on as we get to the fields and Puzak begins to drop back.
I'm not very optimistic about my chances against Carr in a footrace, but there's no need to give up just yet. I match his speed to 32 but give away about 10 seconds when I misread the approach and turn up the embankment too soon only to have to go down and up to cross the stream just north of the control. Tom takes the low route out of 32 whereas I go around high. I'm not sure if my route was better or if I was just pushing harder but we're back together again when we cross the bridge to 33. Tom surges again out of 33 and gets a few seconds. That expands to about 10 when we hit the trashy woods right before 34. Just as I'm thinking the gap is getting too large, Tom trips and goes down hard. He's back up right away, but it brings his lead back to just a few seconds.
I know that if Tom gets to the top of the dam ahead of me, he'll beat me in. My only chance is to take him on the steep climb up to the road. I pump as hard as I can and am surprised at how well the legs respond. Halfway up the hill, I pass Tom who takes one look at my stride and concedes. I take no chances, sprinting it all the way in to register the fastest finishing split by a healthy margin. The finish is a respectable fourth behind Bakken, Everett, and Karpov.
Missing the podium by 63 seconds is a little frustrating, but I'm really quite pleased with the result. It was much better than two years ago when I simply had nothing in the legs after running Pere Marquette and finished seventh. More than that, I'm happy that I didn't give up mentally when it appeared I was headed for a lackluster finish. I ran the last third of the course as well as I've run anything and found some speed at the end when I really had no reason to believe it was there. I've been worried that I've developed a habit of jogging races in. Finding a big push, and on a day when a big push wasn't easy to find, is a very encouraging development.
While I go home empty-handed, Emily picks up her first Dead Possum. Yaya also posts a win as fastest individual around the White course. It's a happy ride back to St. Louis, made even happier by a stop at Shakespeare's Pizza in Columbia. Along the way, I state that this is actually a pretty good result to go out on. It's not so bad to leave a need for redemption, but it's not great either. It's the sort of thing you can just put behind you and move on. But, before I do that, let me just give the pen back to Charles for a moment and remember the Trot in all its varied incarnations:
It was the best of times (XII - first win), it was the worst of times (V - tiny map of nasty woods; let's do 3 laps of it!), it was the age of wisdom (III - first edition of Eric's Absurdly Detailed Skip Analysis), it was the age of foolishness (II - controls WAY up in the trees!), it was the epoch of belief (IX - curse of the odd trot broken), it was the epoch of incredulity (XI - curse returns), it was the season of Light (X - blindingly bright ice), it was the season of Darkness (VIII - rain and sleet), it was the spring of hope (VI - first time staying with the leaders), it was the winter of despair (I - 3 hours of misery), we had everything before us (IV - birth of the Death Match), we had nothing before us (XIII - dead legs from Pere Marquette), we were all going direct to heaven (XIV - second win), we were all going direct the other way (VII - injured) - in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

Thanks for everything.
And now, the final edition of Eric's Absurdly Detailed Skip Analysis
As always, this analysis is based on speeds and strategic considerations of those running near the front of the pack: specifically the top seven men. I've also included skips taken by the top two women.
1: Obviously a good skip for saving distance and highly desirable for mid-field runners since they may get guided into a few controls by the leaders as they pass later on. For the leaders, it's not such a great deal. Leaving the pack so early gives away the advantage of group navigation. In the worst case, a subsequent error results in getting caught by the group and being a skip down with nothing to show for it. At any rate none of the leaders took it and most got through 2 in under 8 minutes, so the savings can't be more than 6 minutes. There are equally good skips later in the course.
3: Another obvious skip that wasn't taken by any of the leaders for the same reasons noted above. 2-4 and 3-4 are pretty close, so the savings is 2-3 or a bit over 4 minutes.
6: Taken by Moilanen and I'm not seeing the upside. It's an early skip and, while 5-7 is a road run, you really have to hammer it to get the time savings. 5-6-7 is all white woods with less climb and only about 200m further. 5-6-7 was under 8 minutes and I don't see running the 800m on road much quicker than 4:00 given the hill. You still have to nav into the control, so the savings is probably around 3:30.
7: The first skip widely taken among the leaders, including both Bakken and Everett. Assuming you take the low route from 7-8 (I didn't, but the ridge top route is about the same), the savings is basically an out-and-back from the trail bend to 7: about 800m through flat, open woods or 5:00.
14: This skip was very popular with the slower runners, and rightly so. It shaves nearly a full kilometer off the course. However, all that distance is fast running through fields so the savings is only about 5 minutes for those with the legs. Among the leaders, only Everett took this and it may have been a fatal mistake. Foot speed is his greatest asset and removing some of the fastest running on the course doesn't seem to be playing to his strengths. Maybe he saw it as his best chance to separate from Bakken, who was the only person matching his pace at this point. After the skip, he was less than five minutes ahead and Bakken got that back and more by skipping 23.
15: A lot like skipping 14, except it saves less distance (about 600m). Four minutes tops, probably closer to 3. None of the leaders skipped here.
20: Obvious and OK, but not great. Saves all of 20-21 (4:00) plus about 200m of trail running. Total savings less than 5 minutes. The biggest upside is that 21 is considerably easier to spike from above as the reentrant gets very vague in the direction of 20. That said, I missed 21 left and still had the third fastest split, so it was obviously a place where you could make a mistake and recover quickly. None of the leaders skipped here, but just slightly further back, Harding skipped and it jumped him up a whopping 9 places. He only gave back four of them to the finish, so there may have been some valid tactical considerations to justify it.
21: I like this one a lot better, even though it's practically a mirror image of skipping 20. Again, all of 20-21 is saved plus about 200m. The difference here is that the 200m isn't flat trail, it's grunting up the ridge leaving 21. That raises the savings to around 5:30. For such a good skip, it wasn't very popular among the leaders. Only Karpov took it.
23: The best skip and taken by all the leaders except Everett and Moilanen. 22-24 is a no-brainer leg running right up the stream through white woods; most of the leaders were around 3:30. It's difficult to assess 22-23-24, since Everett was the only really fast runner taking it, and presumably he was running very hard here since this was do-or-die time to justify his earlier skip at 14. He ran it in 9:11. I think most of the leaders would have been closer to 10:00. At any rate, whether you call the savings 5:30 or 6:30, the bottom line is that Bakken got to 24 a minute clear and was not threatened the rest of the way.
27: I was the only leader waiting this late to skip and it wasn't really worth the wait. It was better than my splits would indicate as I boomed 28, losing about a minute there. All of 26-27 (3:30) is saved, plus a little extra distance. Maybe 4:00, but certainly not any better than that.
So, it appears that 23 and 21 are objectively best. They're nicely placed from a strategic standpoint as well. Full marks to Karpov for getting it right. Slower runners would probably do better to skip 1 or 14 instead of 21. As much fun as this post-race analysis is, the best strategy, as usual, was to pick your two skips quickly and then get back to navigating. A single error would cost more than a sub-optimal skip.
Thursday, November 19, 2015
Bayesian answer to NP Complete
So, we're spending the remainder of the Algorithms class on NP-Completeness. Makes sense, since that's pretty much the BIG PRIZE for algorithms folks (even though nearly all of them concede that NP-Complete is almost certainly intractable). It got me thinking, though. Suppose I looked at my problem of mining large data with confidence intervals and found a way to reduce an NP-Complete problem to that. You wouldn't have actually solved the problem, but you'd have a confidence interval, and that's really good enough for most applications. The great thing about NP-Complete is that if you solve one, you pretty much solve them all, aside from some messy but otherwise straightforward transformations. So, we'd have a general-purpose solver for NP-Complete problems with a confidence bound: either set the bound in advance and still return in polynomial time, or fix the running time and take whatever confidence interval you get. Either way, it's a usable solution that scales.
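This isn't the reduction I have in mind, but here's a toy sketch of the flavor (Python, my own example): estimate the fraction of satisfying assignments of a CNF formula by random sampling. Hoeffding's inequality ties the sample count to the error bound and the alpha-risk, so you can fix the bound and the risk and still finish in polynomial time, or fix the compute budget and back out whatever interval it buys you.

import math
import random

def estimate_sat_fraction(clauses, n_vars, eps=0.05, alpha=0.05):
    # clauses: list of clauses, each a list of non-zero ints (DIMACS-style
    # literals: k means variable k is true, -k means it is false).
    # Hoeffding: |estimate - truth| <= eps with probability >= 1 - alpha
    # after N = ln(2/alpha) / (2 * eps^2) samples, each of which is checked
    # in time polynomial in the formula size.
    n_samples = math.ceil(math.log(2 / alpha) / (2 * eps ** 2))
    hits = 0
    for _ in range(n_samples):
        assignment = [random.random() < 0.5 for _ in range(n_vars + 1)]
        if all(any(assignment[lit] if lit > 0 else not assignment[-lit]
                   for lit in clause)
               for clause in clauses):
            hits += 1
    return hits / n_samples

# Toy formula (x1 or x2) and (not x1 or x3); the true fraction is 1/2.
print(estimate_sat_fraction([[1, 2], [-1, 3]], n_vars=3))

The catch, of course, is that an additive bound on a fraction is a much weaker statement than a bound relative to the optimum, which is exactly the gap the real work would have to close.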
I'm sure I'm not the first person to think of this but, I don't get the impression that it's a "hot" area. Theoreticians aren't really interested in approximations and engineers would rather exploit the specifics of a problem to get the best answer given resource constraints than use a general purpose solution that's worse. It may well be that nobody's been motivated to put in the effort to make it work (and, unlike a theoretical result, something like this has to actually work or nobody gives a shit.)
Something to think about on long runs.
Wednesday, November 18, 2015
Open problems for Bayesians
So, I'm catching my breath a bit this week and have had some time for a little background reading. In particular, I enjoyed this column on open problems in the Bayesian space. I was particularly intrigued by the quote:
Several respondents asked for a more thorough integration of computational science and statistical science, noting that the set of inferences that one can reach in any given situation are jointly a function of the model, the prior, the data and the computational resources, and wishing for more explicit management of the tradeoffs among these quantities.
While I've never framed it this way myself, this is a pretty good summation of where my head is heading into my research. There are questions that we want to ask, data we can use to answer them, and biases that come from our assumptions. But nobody seems to concern themselves with the skew that comes from the actual mechanics of deriving the answer. Most people just assume that the effect is negligible (or simply don't acknowledge it at all). However, as the provider of such systems, I can assure you that the effect is material.
In particular, if I can't build you a system that gets you a good answer fast enough, you will use a bad answer. That's my goal in a nutshell: good (not perfect) answers given in minimal time.
Tuesday, November 17, 2015
The right tool for the job
Had a fun debate in Languages class this evening revolving around the undeniable fact that functional languages don't get a whole lot of use. The advocates of such languages seem convinced that this is recalcitrance on the part of the programming establishment, compounded by the fact that the underlying hardware is biased towards imperative execution (computer engineers are apparently in on the conspiracy).
I generally keep debates involving zealots at arm's length but, since nobody was taking the pragmatic side, I decided to call shenanigans.
Let's start with the easiest part to debunk: hardware has nothing to do with it. C++, Java, and C# would never have supplanted C if fast execution were the primary concern of developers. Scripting languages are an order of magnitude slower than LISP, yet mountains of code are written in them. The truth is, with a few exceptions in the areas of systems programming and real-time apps, nobody cares about performance anymore. Just use Facebook on an iPhone4 over 3G if you don't believe me (I do this all the time, and it's sloooooooow). Users who want snappy performance are happy to pay for it by purchasing better hardware.
So that leaves us with the great mass of anti-functional programmers. Are these folks just not good at math? Do they not see the value of referential transparency? Do they not see the evil in side effects?
Whoops, ya might be onto something with that last one. I just called a function that wrote a record to a database. That's a side effect. BAD! I just wrote another that sent a message to a web service. Side effect, BAD! This one turns on your cell phone camera. Here's one that changes the thermostat setting in your house. Another alters the fuel mix in your car's intake. BAD BAD BAD!
The truth is, the vast majority of what computers actually do is side effects. Performing actual computation is maybe 5% of computing. The rest of it is just moving crap around and interacting with peripherals. Functional languages are terrible at this because THEY DON'T ALLOW SIDE EFFECTS.
I'm all for using functional languages for what they are good at: computing functions. But if I want a computer to actually do something, I tell it what to do. Which is, of course, exactly why they are called imperative languages.
Monday, November 16, 2015
Sunday, November 15, 2015
Prairie Spirit 50 2013
This week's throwback race report is from March 23, 2013.
Be Epic. Well, it's catchy. A bit vague to qualify as actual life advice and perhaps overstating the magnitude of a rails-to-trails run, but sometimes these prophecies fulfill themselves. My own credo is a bit more specific, but we'll get to that in a bit.
Be Epic is the tagline for Epic Ultras, brainchild of seasoned ultrarunner and race director Eric Steele. Having heard excellent reviews of some events he's directed in conjunction with other clubs, I was willing to give his first foray into "For Profit" race promotion a try. Personally, I've never understood the general bias against people who decide to put on races for a living. I'm quite happy that my accountant, attorney, physician, auto mechanic, and tree trimmer rely on me paying my bills so they can pay theirs. It's a good motivator to do the job right. I'm happy to add a race director to that list if they also provide a good event.
So, off I go to the Prairie Spirit 50, a 50-mile footrace on an old railbed in Kansas. It's my first competitive ultra in nine months and I figure, if nothing else, it will give me some sort of read on my fitness as the course is definitely PR material. The maximum grade for most railroads is around 2.5% and this one doesn't even get half that steep. The surface is the familiar crushed limestone that tops most of our converted railways in St. Louis. The only catch is the weather which is always dicey this time of year. The forecasters aren't sure what will be coming down, but they're all in agreement that we'll be getting some nasty form of rain, sleet, or snow (probably all three) with temperatures right around freezing.
Upon arriving at the start/finish town of Ottawa, I immediately check out the "trail". My experience in the Flat Five (a favorite summer run which features a couple miles on the Katy Trail in St. Charles) is that grip is a real problem on this surface. The base is too hard for even an aggressive tread to bite into, but the loose dust and gravel on top provide poor contact for a road tread. I've brought my drill and machine screws to stud my shoes. Fortunately, that turns out not to be necessary. While this trail would be no better than the Katy at 6:00/mile, my 50-mile pace is two minutes slower than that. At that speed, I find I can run the trail in road flats with almost no slipping. Just in case the adverse conditions change that, I put a fully-screwed set of trail shoes in my drop bag. I'll have the chance to switch at 16 miles going out and 34 coming back.
I'm doing this one on the cheap; taking in the provided pre-race dinner and staying at the Econo-Lodge. Both turn out to be pleasant surprises. Neither is swank, but certainly far better than one would expect for price tags of $0 and $42, respectively.
I wake the next morning to the sound of the 100-mile crowd leaving their rooms. They start at 6AM, whereas the 50 doesn't get going until 8. I have the convenient excuse of having a solo at Palm Sunday service tomorrow to get me out of the long event. I'm pretty sure it's further than I'd like to run on a railway, particularly given the meteorological conditions. That said, one couldn't ask for much better weather for the start: right at freezing with no precipitation and very little wind. Since my hotel is a block from the 3-mile mark, I wander over to the course and watch the headlamps emerge out of the darkness. I offer some encouragement: "If this was a 5K, you'd have 100 meters to go!"
It's still quite nice as we line up for the 50-mile start; the nasty stuff isn't supposed to arrive until noon. My strategy for long events is to divide the race into roughly thirds. To borrow Mr. Steele's parlance, the emphasis for the first third is "Be Smart". You can't win an ultra in the first hour, but you sure can lose one. Despite pulling back on the reins best I can, I find myself leading the field onto the trail, which is actually a paved bike path through the town; turning to gravel as we leave at 3 miles. By the aid station at 4.5 miles, an unmanned water drop, I'm completely alone. Seven hours is a long time with no company, so I'm not at all disappointed that Tom Aten catches up when I stop for a quick pee break half a mile later. We chat as we pass the miles easily, arriving at the first full aid station (9.25 miles) at 68 minutes. That's a fair bit quicker than I was planning on, but the opening hour has been particularly favorable with temps still just above freezing, a slight tailwind, and the first three miles on pavement. Tom spends a bit more time grabbing food than I, but has no trouble catching up.
After another 45 minutes, I get my first indication that I might be overcooking things. The pace still feels right, but my right hamstring and glute are sending a few warning signs. I mention to Tom that I'm a little concerned and may have to slow it down a bit. He obliges with a slight easing. At two hours Tom casually mentions that we're coming up on the next aid station. Indeed, we do appear to be arriving at a small town, but I wasn't expecting to get here for another 10 minutes. I check my watch again. Yup, I read it right: 2:03 for the first 16 miles. This is way too fast, even in such good conditions. Grade for part 1: F.
For the middle section of the race, the catchphrase is "Be Focused". Normally, this means holding onto the pace as it starts to feel difficult. In this case, it means trying to find a new pace that won't result in complete disaster. I decide that my best bet is to find my originally intended pace (roughly 8:10, with the goal of finishing just under seven hours) and hang on as long as possible. Tom continues on at the 7:30 clip we've been running. It's difficult to voluntarily slip back from the front of the race, but it must be done. If he's got the legs to run the full 50 at this pace, then he'll win no matter what I do. My only hope is to get back on plan and hope he fades. Unfortunately, as I watch his metronomic cadence disappear into the distance (and you can see a loooong way ahead on a Kansas railroad), I note nothing that would indicate that will happen.
Now on my own, I am free to observe that the course is really quite pretty. Nothing dramatic, like the mountain ultras, but the plains do have an appeal of their own. The stark horizon of the prairie is offset by the fact that the trail itself has a fairly dense hedgerow along both sides. It's like running through the woods in a field, which makes no sense, but I'm not sure I can describe it any better. While the right half of my drivetrain is still complaining, it's not getting any worse.
Unlike the tiny villages that hosted the aid stations at 9 and 16, the turnaround town at Garnett is large enough that we are again on pavement for a mile or so coming into the halfway mark. Tom heads out from the aid station (which is inside the old train depot), just as I arrive. Rather than immediately chasing after him, I take a few minutes to make sure I get everything I need. Arriving at 3:23, I've still got a chance at breaking seven hours if I can hold myself together. At 3:27, I'm back on the trail heading home.
The first mile back is exposed to the wind that was our friend heading out. While it's not howling, it's enough to be noticeable both in effort and chill. I have a rain jacket, heavier gloves, and a waterproof hat in the back pockets of my jersey if I need them, but decide to hold off for the moment. I'm able to hold the pace, but it no longer feels like backing off; it now feels like pressing. Twenty five miles is a long way to press. And then, the climb begins.
Oh, go ahead and laugh. Yes, Kansas really is flatter than a pancake and the trail only rises 150 feet. However, a steady 1% grade into the wind for three miles is not exactly a morale booster when trying to hang onto a pace by your fingernails. By the time I get to the Richmond aid station at 34 miles, focus isn't going to cut it. It's time to move on to the final phase: "Be Tough".
Providing some motivation for that is the fact that Tom has taken a bit of a holiday at the aid station and scrambles to head out just as I am coming in. I stop for less than 20 seconds to down a couple potato chips (about all I can eat at this point anyway), refill my bottles, and head off in pursuit. In the face of weather that is clearly about to send us a bill for the nice morning, he's donned a bright red jacket, so he's easy to see. And what I see is a runner who is in absolutely no danger of being overtaken by me, despite my increasingly desperate efforts to stay in contact. He is back on his 7:30 pace and is out of sight before the first wisps of sleet arrive.
At first it's just ice pellets. Nothing more than a nuisance. In some respects, even helpful. Much as smelling salts might be offered to a fighter in the later rounds of a bout, the tiny stings help keep my mind from wandering off into a place where I no longer care enough to keep trying. I've conceded the win, and the sub-7 is slipping out of reach, but I could still get a PR and second place is better than every other place but first, so it's worth fighting for. I have no idea how those behind me are faring, but the top ten were only separated by 15 minutes or so at the turn, so it's quite possible I'll see a challenge from behind if I don't hold it together. While this is clearly not the time for heroics, it's no time to give up either.
As the storm gets thicker and wetter, I resolve to stay at something approaching a 9:00 pace. As I reenter Ottawa, it's clear that the issue is no longer in doubt. The trail is deserted each direction for as far as I can see (which is still a quarter mile, even in these conditions). Short of lying down for a nap, I'll PR and get second. I get off the gas completely, knowing that the next few days will be infinitely more pleasant if I jog easy the last few miles rather than hammering them home. By the time I get to the finish, the snow has started accumulating on the ground. The actual finish is inside the community center and the volunteers admonish me not to run into the building too quickly for fear of slipping on the wet floor. Hardly needed advice considering my current shuffle, but I do take care and cross the finish upright at 7:10:10. It's a three-minute improvement over my previous best and, despite my miscalculation in the early going, Tom (who finished in 6:58) was clearly the better runner today, so I have no regrets about the placing.
I am, however, a bit bent about the collapse. Falling apart at the end of a long run is just not the sort of mistake I typically make. Up until mile 35, I still had it in my head that Tom was the young one who was more likely to cave and I just needed to stay in it, letting age and experience work for me. Instead, it was I who faded terribly while he disappeared over the horizon. I don't want to be too down about an objectively good result, but clearly there is some work to do.
That said, how often does one get a chance to win a race outright? I've won three ultras in my life: a 50K and a couple of timed 6-hours. This would have been the longest running win of my career. As much as my early effort made me suffer through the last two hours, I would suffer through the next several years if I thought I had failed to win because I didn't try. I won't lose any sleep over that because I know full well that I tried to match the lead and simply didn't have the ability to carry the pace. I'm OK with other people being better than me. I'm not OK with going down without a fight.
So, it's with generally happy thoughts that I return to St. Louis (a trip that takes almost as long as the race due to the snow). I'll concede that some of those happy thoughts are of the knowledge that I'll be home, rested, and comfortably seated in the choir loft while some of the 100-milers are still out there battling the elements. For a flat run on a railway, this was about as Epic as it gets.
Saturday, November 14, 2015
Off weekend
Yesterday culminated a pretty tough week: 50 hours at work and 20 at school. With assignments caught up, no tests until finals, and the big work deadline met last night (this morning, really, at 1AM when we finished reconciliation of the recalc), I am taking the whole weekend off. I may get to posting an old race report tomorrow.
Till then, a shout out to my teammates that are running Tunnel Hill today. I was signed up myself, but had to bail because I broke my toe three weeks ago (I can walk on it fine; but running 50 miles would be a bit much). Probably just as well; I don't think I would have run well after the past few weeks.
Thursday, November 12, 2015
Languages HW4
The format is a bit of a mess, but I just finished the second exam (which seemed to go fine; apparently studying does pay off) and am not really in the mood to work on cleaning it up.
CS4250: Programming Languages
HW4: Eric Buckley – 18148800
8.4) Requiring the use of unique closing reserved words (“fi” or “end-if” for “if”, etc.) for compound statements removes ambiguity around binding of subordinate clauses. For example, in C, the code:

if ( x == 0 )
    if ( y == 0 )
    {
        y = 1;
        x = 1;
    }
    else
        y = 3;

relies on the programmer knowing that the else will bind to the closest open if. However, in Ruby, the statement would be written as:
if ( x == 0 ) then
    if ( y == 0 ) then
        y = 1
        x = 1
    else
        y = 3
    end
end
The ambiguity is resolved by the presence of the end keyword.
On the other hand, binding an else to the nearest open if is such a common paradigm that it can be argued that forcing the use of a closing keyword is superfluous at best and may even impede readability. Furthermore, the gains of using the closing keyword are only realized when a compound statement is used. Unless a language adopts the syntax (as does Ruby) that ALL blocks are compound statements, there will still be ambiguity when a single statement is used in the then clause. Forcing the use of compound block structures when a simple statement is used increases line count and decreases readability of otherwise straightforward structures.
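For example (a snippet of my own, not from the text), drop the braces from the C version above and the same question comes right back:

if ( x == 0 )
    if ( y == 0 )
        y = 1;
    else        /* pairs with the inner if, whatever the indentation suggests */
        y = 3;

Only the indentation hints at the intent; the compiler pairs the else with the inner if regardless.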
8.9) Java’s restriction of using only Boolean expressions in control structures was a response to the fact that many C and C++ program errors resulted from an inadvertent use of an arithmetic expression. In particular, “comparisons” of the form:

if ( x = 0 )

which will always evaluate to false as well as setting x to zero, when what was intended was

if ( x == 0 )

were so commonplace that experienced C programmers formed the habit of writing such expressions as

if ( 0 == x )

simply so the compiler would catch the error if the additional = sign was omitted.
That said, there are legitimate reasons for the use of arithmetic expressions in control statements, the most common being the ubiquitous string copy that appears in nearly every C program ever written:

while (*dest++ = *src++);

This code is not only concise and well understood by any seasoned C programmer; most optimizing compilers will reduce it to a single assembly instruction (assuming the underlying hardware supports a string copy).
8.4Prog) The C program:

j = -3;
for (i = 0; i < 3; i++) {
    switch (j + 2) {
    case 3:
    case 2: j--;
        break;
    case 0: j += 2;
        break;
    default: j = 0;
    }
    if ( j > 0 ) break;
    j = 3 - i;
}
can be rewritten without goto's or break as follows:

bool term = false;
j = -3;
for (i = 0; (i < 3) && !term; i++) {
    if ( 3 == (j+2) || 2 == (j+2) )
        j--;
    else if ( 0 == (j+2) )
        j += 2;
    else
        j = 0;
    if ( j > 0 )
        term = true;
    else
        j = 3 - i;
}

and even with that, if said programmer works for me, they’d better have their résumé up to date, because I’ll fire them.
9.5) Consider the program in C syntax:

void swap (int a, int b) {
    int temp;
    temp = a;
    a = b;
    b = temp;
}
void main() {
    int value = 2, list[5] = {1, 3, 5, 7, 9};
    swap(value, list[0]);
    swap(list[0], list[1]);
    swap(value, list[value]);
}
a) The values of the variables after each call when pass by value is employed are:

value | list            | note
2     | {1, 3, 5, 7, 9} | values are switched in swap, but not returned to main
2     | {1, 3, 5, 7, 9} |
2     | {1, 3, 5, 7, 9} |
b) When pass by reference is employed:

value | list            | note
1     | {2, 3, 5, 7, 9} | swap switches and returns value and list[0]
1     | {3, 2, 5, 7, 9} | swap switches and returns list[0] and list[1]
2     | {3, 1, 5, 7, 9} | despite the aliasing on the call, once the addresses for value and list[value] (list[1] in this case) are passed, the subprogram does not modify the addresses, just the contents of those addresses. Therefore, swap exchanges the values of value and list[1].
c) When pass by value-result is employed, the result depends on the time of address binding:

value | list            | note
1     | {2, 3, 5, 7, 9} | swap switches and returns value and list[0]
1     | {3, 2, 5, 7, 9} | swap switches and returns list[0] and list[1]
2     | {3, 2, 1, 7, 9} | if address binding is done at the time of entry, the results are the same as pass by reference. If, however, address binding is done at the time of exit AND addresses are computed left to right, then the returned value of the second parameter (1) will be placed in list[value], which now evaluates to list[2], giving the result shown.
9.7) Consider the program in C syntax:

void fun (int first, int second) {
    first += first;
    second += second;
}
void main() {
    int list[2] = {1, 3};
    fun(list[0], list[1]);
}
a) The resulting values of the list array when call by value is used are {1, 3}, since the modified values are not returned to main.
b) When call by reference is used, the content of each address is added to itself, yielding {2, 6}.
c) When call by value-result is used, each value is added to itself and then returned to main, again yielding {2, 6}.
9.11) The out modifier in C# allows a value set in the called program to be returned to the calling program. This allows a method to return more than one intrinsic-typed value (objects and arrays are always passed by reference in C++, Java, and C#). While Java and C++ can implement multiple return values, either by boxing (putting an intrinsic-typed element inside an object) or, in the case of C++, simply passing the reference, the out keyword offers a more reliable alternative. Out parameters MUST be set by the called program (similar to return values). Also, the referencing is somewhat safer in that the compiler will not allow aliasing of an address that may also be accessed through another reference, such as an element of an array or an object member. Finally, while the program will compile, a warning will be generated if an out parameter is used in an expression prior to having its value set.
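For comparison, here is the closest C idiom, a sketch of my own with made-up names: returning two results through pointer parameters. Nothing forces divmod() to assign its outputs before returning, and nothing stops a caller from passing the same address twice, which is exactly the looseness the C# out keyword tightens up.

#include <stdio.h>

/* Hypothetical example: return two results through pointer parameters.
   Unlike a C# out parameter, nothing requires this function to assign
   *quot and *rem before returning, and nothing stops a caller from
   passing the same address for both. */
static void divmod(int dividend, int divisor, int *quot, int *rem)
{
    *quot = dividend / divisor;
    *rem  = dividend % divisor;
}

int main(void)
{
    int q, r;
    divmod(17, 5, &q, &r);
    printf("17 = 5 * %d + %d\n", q, r);   /* prints 17 = 5 * 3 + 2 */
    return 0;
}

The C# equivalent, declared with out parameters, would refuse to compile if either output could be left unassigned on some path through the method.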
10.9) Introducing a second static chain link which points to the grandparent activation instance decreases non-local access cost by at most 50%. In general, a depth difference of d results in a chain of length ⌈d/2⌉, since the grandparent chain can be followed back through any even depth difference, with the parent chain used once for an odd difference. (For example, a depth difference of 5 takes two grandparent links plus one parent link instead of five parent links.) Thus the number of links traversed is at most halved. While this savings is not insignificant when d is large, it is offset by the fact that two chain pointers need to be maintained at program linkage, which increases the time for the call whether or not any non-local variables are ever used in the subprogram. Furthermore, an optimizing compiler should be able to predict how many non-local references are needed and determine if the chain should be computed once at entry or every time a variable is referenced. Thus, the savings is at best minimal and does not justify the additional complexity. (And really, if speed is a concern, the programmer shouldn’t be using such constructs to begin with.)
10.11) If a static-chain process is used to implement blocks, the activation record must contain two of the five elements typically stored in an activation record:

- The dynamic link, which points to the activation record of the enclosing block or subprogram. The block itself is considered to be one level deeper than its enclosing block/subprogram for purposes of resolving references to variables defined outside the block.
- Space for variables defined inside the block.

The other three elements are not necessary:

- The static link is not necessary because it will always be the same as the dynamic link. (Blocks are always “called” from the block/subprogram that defines them.)
- The return address is not needed because the block will not return control to the point where it was entered. Instead, control will flow out of the block either to the next statement following the block end or to the target of a goto.
- Storage for parameters is also not needed because blocks do not take parameters.