Wednesday, September 30, 2015

Cheater's guide to recurrence

Divide and Conquer algorithms typically generate a performance function that looks something like this:

T(n) = a T(n/b) + f(n)
where n is the size of the input. There's a nice theorem (the Master Theorem) for generating closed-form solutions to these. However, if you start messing with the recurrence a bit, things can get ugly quickly. For example, in our last homework we had:


No theorem for this guy. So, the next best thing is to take a stab at the solution and then try to prove you're right by substituting the answer back into the recurrence. All good, but how do you guess the solution? Is it at all obvious that the above recurrence is  ? It sure wasn't to me.

The standard tactic is to build a recursion tree, figuring that a visual representation of what's going on might help steer you the right way. Again, it didn't help me much. However, recursion trees are a tool developed back in the 40's when the thought of actually computing these functions recursively was a non-starter. Now, it's easy. It took me less time to create a spreadsheet that calculated the first million values than it did to draw a useless recursion tree. Once I had the values, it was a fairly simple matter of matching the curve to various functions of n to get an asymptotic bound. Once I had the bound, I realized how I could redraw the tree to suggest such an answer (I wouldn't have bothered with that step except that the homework called for it). Sticking it into the recurrence and proving that it worked was pretty straightforward.
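
For the record, here's the whole exercise in a few lines of Python rather than a spreadsheet. The recurrence below is the generic T(n) = 2T(n/2) + n rather than the homework one, since I'm not reproducing that here; the point is just that if T(n)/g(n) flattens out to a constant, g(n) is a good guess for the bound.

import math

N = 1_000_000
T = [0.0] * (N + 1)
T[1] = 1.0
for n in range(2, N + 1):
    T[n] = 2 * T[n // 2] + n        # stand-in recurrence: T(n) = 2T(n/2) + n

for n in (100, 10_000, 1_000_000):
    # candidate bound g(n) = n lg n; the ratio settling down suggests Theta(n lg n)
    print(n, T[n] / (n * math.log2(n)))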

I think recursion trees are a useful tool for understanding the concept. They may also be of use in explaining why a solution works. But, anybody who's using them to actually solve non-trivial problems is really wasting their time.

Tuesday, September 29, 2015

More homework

Seems like we're pretty much full on now. Assignments turned in and received in both classes. These next two are easier (and due quicker). I knocked out the Languages one tonight. As usual, the Algorithms one will require more care.

Monday, September 28, 2015

CS5130 HW1

#3a was really hard; I had to resort to a little hand waving. We'll see what the prof thinks of my answer.

HW1


Friday, September 25, 2015

CS4250 HW1

It's not due until Monday but, from my page stats, it's obvious nobody's copying from here. Here's my first assignment for my languages course. The parse tree diagrams don't show up and I'm not really feeling like solving that problem right now.

CS4250: Programming Languages
HW#1
Eric Buckley
3) Rewrite the BNF of Example 3.4 to give + precedence over * and force + to be right associative.
Original BNF from 3.4:
<assign> -> <id> = <expr>
<id> -> A | B | C
<expr> -> <expr> + <term> | <term>
<term> -> <term> * <factor> | <factor>
<factor> -> (<expr>) | <id>
Switching the operators for <expr> and <term> gives + precedence over *. To make + right associative, we make sure the parse tree expands from the right rather than the left for <term>, i.e., <term> is right-recursive. Thus, the new BNF is:
<assign> -> <id> = <expr>
<id> -> A | B | C
<expr> -> <expr> * <term> | <term>
<term> -> <factor> + <term> | <factor>
<factor> -> (<expr>) | <id>

6) Using the grammar in Example 3.2, show a parse tree and a leftmost derivation of A=A*(B+(C*A))
BNF 3.2:
<assign> -> <id> = <expr>
<id> -> A | B | C
<expr> -> <id> + <expr> | <id> * <expr> | (<expr>) | <id>




Leftmost derivation:
<assign> => <id> = <expr>
                => A = <expr>
                => A = <id> * <expr>
                => A = A * <expr>
                => A = A * (<expr>)
                => A = A * (<id> + <expr>)
                => A = A * (B + <expr>)
                => A = A * (B + (<expr>))
                => A = A * (B + (<id> * <expr>))
                => A = A * (B + (C * <expr>))
                => A = A * (B + (C * <id>))
                => A = A * (B + (C * A))
Parse tree:
7. Using the grammar in Example 3.4, show a parse tree and a leftmost derivation for A=(A+B)*C
(See #3 for BNF)
Leftmost derivation:
<assign> => <id> = <expr>
                => A = <expr>
                => A = <term>
                => A = <term> * <factor>
                => A = <factor> * <factor>
                => A = (<expr>) * <factor>
                => A = (<expr> + <term>) * <factor>
                => A = (<term> + <term>) * <factor>
                => A = (<factor> + <term>) * <factor>
                => A = (<id> + <term>) * <factor>
                => A = (A + <term>) * <factor>
                => A = (A + <factor>) * <factor>
                => A = (A + <id>) * <factor>
                => A = (A + B) * <factor>
                => A = (A + B) * <id>
                => A = (A + B) * C



7 (cont.) Parse Tree:




8) Prove the following grammar is ambiguous:
<S> -> <A>
<A> -> <A> + <A> | <id>
<id> -> a | b | c
The ambiguity comes from the fact that + can associate either left or right. The following two parse trees are both valid parsings of a + b + c:
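
Since the trees are images that didn't make it into the post, here's the same argument as derivations. Two distinct leftmost derivations of a + b + c (which is itself enough to prove ambiguity):

<S> => <A> => <A> + <A> => <id> + <A> => a + <A> => a + <A> + <A> => a + <id> + <A> => a + b + <A> => a + b + <id> => a + b + c
<S> => <A> => <A> + <A> => <A> + <A> + <A> => <id> + <A> + <A> => a + <A> + <A> => a + <id> + <A> => a + b + <A> => a + b + <id> => a + b + c

In the first, the second + hangs off the right-hand <A> (right association); in the second, off the left-hand <A> (left association).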

11) Consider the grammar:
<S> -> <A> a <B> b
<A> -> <A> b | b
<B> -> a <B> | a
a) baab is valid (S => <A> a <B> b => b a <B> b => baab)
b) bbbab is not valid because the first and third rules imply that there must always be a double “a” in the string (<S> includes “a <B>” and <B> must always start with “a”).
c) bbaaaaaS is not valid because “S” is a nonterminal symbol; it can never appear in a derived string.
d) bbaab is valid (S => <A> a <B> b => <A> b a <B> b => bbaab)



13. Grammar for strings of n “a”s followed by n “b”s, where n > 0.
<S> -> ab | <A>
<A> -> ab | a<A>b
14. Parse trees for the above grammar for aabb and aaaabbbb.
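
Those trees are also images that didn't survive, so here are the equivalent derivations:

aabb: <S> => <A> => a<A>b => aabb
aaaabbbb: <S> => <A> => a<A>b => aa<A>bb => aaa<A>bbb => aaaabbbb
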
16. Convert the BNF from Example 3.3 to EBNF:
BNF 3.3:
<assign> -> <id> = <expr>
<id> -> A | B | C
<expr> -> <expr> + <expr> | <expr> * <expr> | (<expr>) | <id>
EBNF:
                <assign> -> <id> = <expr>
                <id> -> A | B | C
                <expr> -> <expr> ( + | * ) <expr> | (<expr>) | <id>



17. Convert the following EBNF to BNF:
<S> -> <A> {b <A>}
<A> -> a [b] <A>
BNF:
<S> -> <A>
<A> -> <A> b <A> | <B>
<B> -> a <B> | ab <B>
18. An Intrinsic Attribute is an attribute of a leaf node on the parse tree whose value comes from outside the parse tree (e.g., the type of a variable, which is typically stored in the symbol table during compilation). Intrinsic attributes are then passed to the leaf node's semantic functions to produce non-intrinsic Synthesized Attributes, which are passed up to the parent node, which in turn uses them as inputs for its own semantic functions.



23. Compute the weakest precondition {P} for each statement and postcondition:
a) {P} a = 2 * (b - 1) - 1 {a > 0}
   {P} = {2 * (b - 1) - 1 > 0}
       = {2b > 3}
       = {b > 3/2}
b) {P} b = (c + 10) / 3 {b > 6}
   {P} = {(c + 10) / 3 > 6}
       = {c + 10 > 18}
       = {c > 8}
c) {P} a = a + 2 * b - 1 {a > 1}
   {P} = {a + 2 * b - 1 > 1}
       = {a + 2 * b > 2}
       = {b > (2 - a) / 2}
d) {P} x = 2 * y + x - 1 {x > 11}
   {P} = {2 * y + x - 1 > 11}
       = {2 * y + x > 12}
       = {y > (12 - x) / 2}
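
Aside: the mechanical rule behind all four parts is just textual substitution. The weakest precondition of "x = E" with postcondition Q is Q with E substituted for x. Here's a quick sympy sketch of that rule, checked against 23a; sympy and the helper name are my own choices, not anything from the course.

import sympy as sp

a, b = sp.symbols('a b')

def wp_assign(var, rhs, post):
    # wp of "var = rhs" given postcondition post: substitute rhs for var
    return post.subs(var, rhs)

# 23a: {P} a = 2 * (b - 1) - 1 {a > 0}
p = wp_assign(a, 2 * (b - 1) - 1, sp.Gt(a, 0))
print(sp.reduce_inequalities(p, b))   # (3/2 < b) & (b < oo), i.e. b > 3/2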



24. Compute the weakest precondition for each of the following sequences of assignment statements and postconditions:
a) {P} a = 2 * b + 1; b = a - 3; {b < 0}
   {P} a = 2 * b + 1 {P1}, {P1} b = a - 3 {b < 0}
   {P1} = {a - 3 < 0}
        = {a < 3}
   => {P} = {2 * b + 1 < 3}
          = {b < 1}
b) {P} a = 3 * (2 * b + a); b = 2 * a - 1 {b > 5}
   {P} a = 3 * (2 * b + a) {P1}, {P1} b = 2 * a - 1 {b > 5}
   {P1} = {2 * a - 1 > 5}
        = {a > 3}
   => {P} = {3 * (2 * b + a) > 3}
          = {2 * b + a > 1}
          = {b > (1 - a) / 2}
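
And the rule for a sequence is the same substitution applied back to front, which is two calls to the sketch above. Checking 24a:

# 24a: {P} a = 2 * b + 1; b = a - 3 {b < 0} -- work backwards
p1 = wp_assign(b, a - 3, sp.Lt(b, 0))    # {a - 3 < 0}
p = wp_assign(a, 2 * b + 1, p1)          # {2 * b + 1 - 3 < 0}
print(sp.reduce_inequalities(p, b))      # b < 1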

Thursday, September 24, 2015

Lots of data vs. Big Data

We use huge databases at work. The primary one I support is 15TB and gets fed from one that's nearly 100TB. But, while it certainly satisfies the first two "v's" of Big Data (volume and velocity), it doesn't really satisfy the third, variety. Our data is highly structured and homogeneous. That's why we've been able to keep it on relational platforms this long. The day has come, however, to start seriously looking at other architectures. We see year-over-year growth in the range of 50%, which means even my "small" database will be up around 25TB by the end of 2016. We're already seeing significant performance degradation due to volume. So, I got to spend most of this week talking with some experts in the Big Data space. Lots of good stuff and quite relevant to my research direction. At first blush, these architectures actually support sampling far better than traditional relational models.

Tuesday, September 22, 2015

Design patterns and semantics

Spent some time yesterday looking for research applying Axiomatic Semantics to Design Patterns. Couldn't find any. A few vague references, but nothing of substance. I asked my prof at tonight's class if he was aware of anybody doing that and he wasn't (though it's not his area of research). Anyway, I spent a little time at lunch today seeing if any of the patterns had obvious semantic structure and I think this is doable. Don't want to spread myself too thin, but even if I could get two or three of the patterns done, it would be enough to start shopping the idea around for some faculty support.

Sunday, September 20, 2015

Big River 10 Mile

No time to put together a meaningful post related to studies, but I will state that I got an overall win (as in, first to cross the line; not age group) at the Big River 10 Mile run in House Springs today. For the moment, the reduction in training due to starting grad school has actually made me faster (sort of a mandatory taper). Soon that will reverse and my fitness will fall. Even now, outright wins are not common. It's entirely possible that today will be my last ever.

Friday, September 18, 2015

Total, partial, and half-ass

So, continuing on last night's idea, there's the issue of just what can and can't be proved via Axiomatic Semantics. The short answer is that the "can't prove" set contains just about everything we care about. Dijkstra's Predicate Transformer is nice from a mathematical standpoint but, as a tool, it's pretty limited because proving total correctness for most programs is a fool's errand: they aren't totally correct!

Hoare's Proof System relaxes the constraint on loops having to finish, which makes it a lot easier to prove many things work and, while infinite loops do happen, they are usually pretty easy to find and fix. However, even Hoare's system becomes unwieldy once you're talking about a decent-sized system. Even taken in unit-test-sized chunks, it's a lot of work.

What I'm suggesting isn't formal validation at all, but rather using the test data to suggest the implementation (rather than the other way around). This is at the heart of Test Driven Design (TDD). The problem I have with Test Driven Design (and, I don't think it's a fundamental problem, just a symptom of a relatively young paradigm) is that it ignores a lot of information in the problem statement when devising a solution. A set of inputs and assertions can be viewed as a contract, but usually these are just examples of a more complex business problem. What I think is missing is a methodical way of looking at the patterns of the test data and determining which design patterns are appropriate for the solution. Refactoring is a very big part of TDD, but it never hurts to start off on the right foot.

Thursday, September 17, 2015

Axiomatic Semantics

I may be on to something here, or I may just be full of shit. The great thing about grad school is that it doesn't really matter that much which it is.

We're finally covering some graduate-level material in Programming Languages (only took three weeks for that to happen). Looking at using Axiomatic Semantics for formal proofs of correctness. Now, nobody who actually gets paid to write code (as opposed to people who write about the code written by people who get paid) actually wastes any time on proofs of correctness. However, I think there's some real practical value in looking at this approach.

Basically, it's finding the loosest condition on your inputs that gives the desired outputs. In other words, how bad can your data be before the program simply can't handle it? That's an important thing to know. And, if you apply the principles of Axiomatic Semantics to Test Driven Design, you can fairly easily arrive at it. This is all spinning unstructured in my brain right now, but I'm going to try to formalize it a bit. I think at the very least there's a decent colloquium presentation in there, if not an outright publication.

Tuesday, September 15, 2015

TDD

I have to admit, I'm not 100% sold on Test Driven Design. I think it's as good as any other paradigm, but I'm not sure it's any better. That said, it is the bomb when dealing with problems on the order of homework assignments (which probably explains why it's THE NEXT BIG THING). I put together a few programs using TDD for my Algorithms class and it was pretty quick work. Way faster than traditional Design/Code/Test. And, that was with having the algorithms already worked out in advance.

Unfortunately, homework assignment problems and the problems we solve in real life for six-figure salaries are quite different. I'm still willing to go that route, given that my current employer has made it clear they want me to, but I'm not sure the gains are quite as obvious.

Monday, September 14, 2015

Edge cases

We were looking at Quicksort today in Algorithms. The proof that the partition step (which is the only step that actually does anything) works relied on the standard tricks for proving loops correct: establish some loop invariants for the base case and then inductively show they hold for subsequent iterations. It wasn't intended to be a super-formal proof, but it did have a pretty big hole in it. Basically, one of the steps assumed that both partitions were non-empty. That will never be the case after the first iteration (since the first item can only go into one of the partitions) and it may never be the case, since terminating with an empty partition is a valid end case.
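
For concreteness, here's a Lomuto-style partition in Python with the invariants marked. (This is my sketch, not the prof's exact version -- I don't remember which variant he used, but the empty-partition edge case shows up in any of them.)

def partition(arr, lo, hi):
    # Partition arr[lo..hi] around pivot arr[hi]; return the pivot's final index.
    pivot = arr[hi]
    i = lo - 1                       # invariant: arr[lo..i] <= pivot
    for j in range(lo, hi):          # invariant: arr[i+1..j-1] > pivot
        # Note: either region may be empty -- exactly the case the
        # in-class proof glossed over.
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    return i + 1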

Now, I know Quicksort works, and it wouldn't have been enlightening to cover extra cases to handle when only one partition has items. So, I kept my mouth shut and let the class move on. However, I do think it drives home one of the problems with formal proofs of correctness. Edge cases are the undoing of so many algorithms. Yet, it is very easy to put together a convincing proof that fails to account for them. I don't think good QA practices will be going away any time soon.

Sunday, September 13, 2015

Flash cards

I'm not having any trouble doing the problems for Analysis of Algorithms, but they are taking longer than they should. I'm just a bit rusty at spotting the patterns to simplify logarithms and polynomials. So, flashcards it is. I made up a bunch of them. I think I'll do the same for some of the more mundane stuff in Programming Languages (just because it's dull doesn't mean it won't be on a test).

You'd like to think that PhD work would be all super deep thoughts and big ideas. To some extent that's true, but you can't go there without a huge base of facts. And, at least in my experience, the best way to cram facts into your brain is flash cards.

Saturday, September 12, 2015

Sabbath II

Nice day off today. I've decided that for blog posts, I'll use the sabbath to copy some of my old race reports from the Carol's Team site (which will be decommissioned in January). As I didn't get to that today, I'll just report that I got in a decent 5-mile result this morning at the Cat Country Runs. You can check out Attackpoint if you care.

Thursday, September 10, 2015

Classroom management

One of the things they don't teach you when you're getting a PhD is how to manage a classroom. And, that's too bad, because there are a lot of really smart people out there doing an absolutely terrible job of managing university classrooms.

I'll get back to mathy-type stuff soon, I just need to rant a bit.

I won't name names, because this isn't a personal attack, it's just citing an example. Modern Programming Languages is the name of the course. It's supposed to be about the underpinnings of good language design. It's an upper division course that also serves as a core course for the grad curriculum. It seems a reasonable expectation that anybody taking such a course would already know what a compiler does.

Well, one of the students doesn't. I'd say too bad for him, but it's really too bad for the rest of us. He's derailed three of the last five lectures with questions about translation to machine code. I'm not talking about the type of stupid questions (yes, there are stupid questions) that everybody drops from time to time and can be dismissed easily by even a clumsy prof. He really wants to be spoon fed a remedial course on compiler construction.

An instructor with some decent classroom management skills would simply suggest that the questions are both background and off topic and that the student should come by during office hours if he needs some guidance on where to read up on this stuff. Instead, we're spending 10-15 minutes of a 75-minute lecture listening to them go back and forth on questions so fundamental, I'd throw them on a test in a 200-level course just so the bottom half of the class would get something right.

I think working in industry makes you a little less tolerant of this sort of crap. Derail a meeting with a senior manager in the room and you will get your hand slapped. After a few such episodes, you just learn when to shut up and figure it out on your own time. And, you come to expect others to do likewise.

Recognizing that this student hasn't been sitting in business meetings for 25 years, I pulled him aside gently after class and told him (politely, I hope) that he needed to stop monopolizing the class, especially since the tests are going to be written by the prof of the other section, which presumably has covered a lot of relevant ground in the hour we've been talking about compilers. Fortunately, by all accounts (including several profs) this isn't a particularly tough class, so it's more bothersome than a real handicap. It is, however, really bothersome.

Wednesday, September 9, 2015

Homework

Well, looks like I'm finally going to have to do something other than show up to class. Got my first homework assignment today. It's for the Algorithms class. It's not due for two weeks and it looks pretty straightforward, so I'll just work it in with my regular study and practice. Obviously, posting the work prior to the due date would not be viewed as cool, but I will put it here once that has passed.

I've decided that, time permitting, I'm going to typeset all my assignments. I'll work them out by hand, of course, but I could use the practice putting together documents in LaTeX and it will obviously reduce the chance of getting docked for a step the grader can't read. I don't know if they still teach penmanship in Engineering School. I had to learn to print very neatly as an undergrad and I got fairly good at it. That, however, was 30 years ago and my handwriting is simply awful now. I just hardly ever do it.

Tuesday, September 8, 2015

Rusty

There's no getting around it: I've forgotten way too much Calculus. Need to start doing some supplemental review, at least in the areas of logarithms and derivatives, if I'm going to ace Algorithms.

Monday, September 7, 2015

Masters win

Did some problems in Cormen today, but nothing worth writing about. So, I'll just state that I also ran the JCC 10K today. Came in 3rd overall and won the masters division (40+). Good result for me. Not too many of those left, I imagine. Hopefully I'll run well at Milwaukee in 4 weeks. After that, who knows?

Sunday, September 6, 2015

Sabbath

My first go-round in grad school (at Cornell), I had a classmate who was an Orthodox Jew. As such, he didn't do any studying on the Sabbath. I asked him once how he managed that since the rest of us were pretty tapped out working 7 days a week. He replied that it was a blessing. He was commanded to take a day off every week. He didn't stress over it because it wasn't negotiable. As a result, he got a truly restful day every week. Needless to say, he poured everything he had into the other six. He has a PhD now. I don't.

I've loosely practiced taking a day off a week before, but I've decided I'm going to be rather strict about it while I'm pursuing my degree. I need the break. My family needs the break. I'll just have to work harder the other six days. I don't really have a problem with that.

There is still the question of which day to take off. The traditional Jewish Sabbath is sundown Friday to sundown Saturday. For Lenten observance, I've always used sundown Saturday to sundown Sunday. That could get a little messy with grad school work, though. Too much temptation to switch things around a bit in a pinch; especially if I had something important due on Monday. So, I think I'll just take Saturdays off completely. Simple, easy to enforce, and not likely to result in a missed assignment.

Saturday, September 5, 2015

LaTeX

Unfortunately, Blogger does not directly support LaTeX, which is kinda crucial for a math blog. I thought about moving to Wordpress, which says they support it, but when I tried it out, I found their "support" wasn't any better than using an external tool, which is what I'm trying out now. Here's one of my favorite equations:

\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}

That was sort of a pain, since I had to switch to HTML mode to paste in the link to codecogs, but not bad. I think with some practice I can get pretty fast at it. The editor at codecogs is pretty bare bones, but it works fine, and I like that the resulting link has a hover tag giving the raw LaTeX (put your mouse over the equation and you can see the code I used to generate it).

By the way, the reason I like that equation so much isn't any particular fondness for the normal distribution. I just really like the derivation. You can find it in just about any Calculus text. The switch to polar coordinates is one of those little brilliancies that makes your heart flutter (assuming you're into math; otherwise it just makes your eyes glaze over).
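
Since I'm on the subject, here's the derivation sketch in raw LaTeX (the brilliancy: square the integral, switch to polar coordinates, and the extra r from the Jacobian makes the integrand trivial):

I = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx, \quad I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy = \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2/2}\,r\,dr\,d\theta = 2\pi \;\Rightarrow\; I = \sqrt{2\pi}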

Friday, September 4, 2015

Free stuff!

One of the few economic advantages of going back to school is that people seem much more inclined to give you stuff for free. I can get MS Office for free. I can get legal counseling for free (emotional counseling, too, but I'm hoping it doesn't come to that). There's always some free entertainment going on on campus. One of the best freebies is that I get to ride mass transit for free. Really! Not discounted; totally free. This is a big deal for someone who grew up in an area where public transportation was pretty much the only viable option for getting around.

Today, I rode in to work on the bus. The bus takes the same route I would drive and does it in about the same amount of time. Granted, there is a mile on either end getting from the closest stops to my endpoints, but I can just add another mile or so and it counts as my morning run. It's still less efficient time-wise than driving in, but it does enable running home from work (running both ways is 17 miles, which is way more than I want to do every day).

Getting from work to school is a bit more problematic. There are multiple transfers involved. The bus route takes an hour whereas I can drive it in 30-40 minutes. At least in that case it is truly door-to-door service, so I could bring something to read and recover a good bit of the time. The rub would be getting from school to home. It would mean getting home pretty late though, again, I could use the time to read, which is obviously not an option if I'm driving.

Thursday, September 3, 2015

Potential thesis topic

I'm meeting with my adviser today so I thought I should write down some thoughts on my research. Here's what I've put together:

I'm interested in the application of sampling theory to database queries. The success criteria for a useful method includes the following:

  • Run time must be significantly faster than simply pulling the full result directly using all rows available.
  • The result must produce not only a point estimate, but a confidence interval on that estimate. Ideally, both 1- and 2-sided intervals are supported.
The first criterion significantly reduces the applicability of pseudo-random sampling at the row level, since pulling individual random rows from a database is a relatively expensive operation. Thus, clustered sampling techniques show more promise. However, adjacent database rows tend to be highly correlated, so the use of clustered techniques widens the resulting confidence interval for a given sample size and introduces complexity into the confidence calculation itself.
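
To make the wide-interval problem concrete, here's a toy Python sketch (everything in it is made up for illustration): estimate a mean from whole clusters and base the standard error on the between-cluster variance of the cluster means. The more correlated adjacent rows are, the more the cluster means spread out, and the wider the interval gets for the same number of rows read.

import math
import random

def cluster_mean_ci(clusters, z=1.96):
    # Point estimate and approximate CI from equal-size clusters, using the
    # between-cluster variance of the cluster means as the error term.
    means = [sum(c) / len(c) for c in clusters]
    k = len(means)
    grand = sum(means) / k
    between = sum((m - grand) ** 2 for m in means) / (k - 1)
    se = math.sqrt(between / k)
    return grand, (grand - z * se, grand + z * se)

random.seed(1)
# Adjacent rows are correlated: each cluster scatters around its own local level.
levels = [random.gauss(100, 5) for _ in range(12)]
clusters = [[random.gauss(mu, 1) for _ in range(50)] for mu in levels]
print(cluster_mean_ci(clusters))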

Kerry and Bland (1998) indicate that "The main difficulty in calculating sample size for cluster randomized studies is obtaining an estimate of the between cluster variation or intracluster correlation." This problem has led to multi-stage sampling techniques where the inter- and intra-cluster variations are estimated at each stage and used to predict the sample size needed at the next layer of refinement. Several epidemiological studies where attaining a true random sample is prohibitive due to geographical constraints have demonstrated this technique (e.g., Galway et al., 2012).

Such methods have their detractors (e.g., Luman et al., 2007), who generally claim that clustered methods consistently overstate results when the variability between clusters is significantly higher than the variability within clusters. In response to this, the World Health Organization has layered various heuristics on top of the cluster selection method to improve performance of estimates of vaccination coverage (Burton et al., 2009). They concede, however, that reliable power estimates are problematic, if not impossible, under their methodology.

The goal of this research is to develop methods for extracting unbiased clustered samples from large databases while retaining the ability to perform power calculations. Some areas to explore are:
  • Using background processes to continually analyze the data to develop better correlation estimates which can be used by subsequent query processing to optimally select clusters.
  • Similarly, background processing can be used to extract a row-wise pseudo-random sample database that can be used for exploratory queries.
  • Using the convergence of the point estimate as an indicator for a stopping rule. That is, refusing to accept that a confidence interval has been met if the point estimate is still showing greater variability with each query iteration.
  • Using the convergence of other correlated characteristics of the data as an indicator for a stopping rule (e.g., if one were estimating the cash flows from a group of insurance policies, one could consider the more volatile component of projected claims, since convergence on that item would imply convergence on the far more stable components of premiums and expenses).
Presumably, a more thorough literature review will turn up lots of other avenues to explore as well; these are just a few things that I've thought of so far.

Wednesday, September 2, 2015

Cheating

So, today in Algorithms we looked at using recurrence relations to prove the asymptotic bounds on run time. The trick, of course, is guessing the right function to begin with. Sure, there are some recognizable patterns, but it seems to me that if you already have the algorithm, the simple way to generate a decent guess for the asymptote would be to code the algorithm and time it on a range of data sets. The shape of the curve should be pretty apparent; there aren't too many algorithms so complicated that they don't converge to their asymptotic behavior pretty quickly. Maybe mathematicians think that's cheating.
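
A sketch of what I mean, with mergesort standing in for whatever algorithm is under analysis: if the ratio of measured time to a candidate g(n) flattens out as n grows, g(n) is a sensible guess.

import math
import random
import time

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (1000, 10_000, 100_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    merge_sort(data)
    elapsed = time.perf_counter() - t0
    # If elapsed / (n log n) is roughly constant, Theta(n log n) is a good guess.
    print(n, elapsed, elapsed / (n * math.log(n)))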

Tuesday, September 1, 2015

Publish or perish

I'm thinking I should write a paper. Publishing something would score some big points with the UMSL faculty at a time I really, really, want them to transfer a lot of credits from my Master's work. So, I'm looking at potential topics:

  • A method for pulling non-rectangular areas out of an OLAP cube. This one is the most obvious candidate because I've already done the work. I just need to write it up. My current employer can't even get mad about it because most of the research was off hours and they never implemented it (though we certainly had it ready to go). Basically, we put a superstructure on top of all the business attributes that allows us to build a hierarchy where the selection criteria at deeper levels are dependent on the selections already made at higher levels. It's actually pretty kick-ass. I have no idea why our users didn't fund the development because it's exactly what they asked for. Anyway, one of the other architects and I got far enough with the design that I could implement a rough version with probably only around 40 hours of work. That would be enough to collect some metrics on performance (to show that it's better than just running a bunch of independent queries and then trying to tie them together with PowerPivot) and write up the results.
  • The Ultrarunner's Guide to Long IT Projects. I know this sounds fluffy, but it's really not. I have a ton of experience in both of these areas and the overlap in successful strategies is no coincidence. The two disciplines require the same mindset. Not sure where I'd publish this, but I've given it a fair bit of thought and I think I could write a pretty good paper in 20-30 hours. Or, I could do a really fun interactive presentation. If I can find an appropriate forum, that might be the way to go.
  • Test Driven Design for database programming. TDD is all the rage but, if you're going to enforce the criteria that unit tests can't hit the database, you're pretty much S.O.L. trying to apply this to database programming. I've found some ways to relax that rule that don't violate the other tenets of good unit test design or of TDD in general. I could probably write this one up quicker than the other two, but I'd have to spend a good bit of time making sure that I wasn't repeating what someone else has already said. I haven't seen anything in the literature about it, but I could have easily missed it.