Satya's blog - 2012/
CLUI: Command line user interface. One of those command-line based
interfaces, usually built with ncurses, that provides a simplified GUI so
you don't have to type actual commands. Examples: aptitude, midnight
commander, pine, mutt.
My desk, running Windows, Mac, and Linux all at the same time on 3 different computers:
Clay Allsop says having a startup means one of: you've raised money, are bringing in substantial revenue, or have a sizable active user base.

I have a side-project, a web application to manage the academic data for students of speech pathology. I'm a contractor, but in Silicon Valley terms, I'm the tech co-founder and probably CTO. We've been around since 2005, in one form or another. In about 2009, we spun off from the institution where the project started and began taking on paying clients.

Right now, we have 32 institutions using the product, with about 8 more in the wings. Each institution has anything from 10 to 100 students. There's our sizable active user base. We don't have substantial revenue by startup standards, but it is comfortable for a side-project. We haven't raised, nor attempted to raise, any funding. We're just going by our day-job paychecks, savings, and revenue.

So by Clay's definition, it's a startup. It's not successful (yet), since it can't be our primary job and pay all our salaries. It can be successful, though, because between the founders we have considerable domain knowledge, technical knowledge, and domain contacts. The business person is a speech pathologist and former professor, which is how this thing got started. Knowing exactly what the customer wants is *gold*. We don't need to advertise. The user base, speech pathology faculty, are a close-knit bunch (apparently) who love what we have so far. And what we have is a fully functional, post-MVP (Minimum Viable Product, i.e. the least number of features you can release that forms something usable) product.

Do I have a startup? Yes. Is it viable? Probably. Can I get funding? Don't know.
I used these instructions to migrate some svn repositories to git:
Retrieve a list of all Subversion committers:

    svn log -q | \
    awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' \
    | sort -u > authors-transform.txt

Edit that file as appropriate.

Clone the Subversion repository using git-svn. Here, I didn't use the --stdlayout argument:

    git svn clone [SVN repo URL] --no-metadata -A authors-transform.txt ~/temp

Convert svn:ignore properties to .gitignore (not required if you didn't have any svn:ignore properties):

    cd ~/temp
    git svn show-ignore > .gitignore
    git add .gitignore
    git commit -m 'Convert svn:ignore properties to .gitignore.'

Push repository to a bare git repository. I didn't link HEAD to trunk, I linked it to git-svn:

    git init --bare ~/new-bare.git
    cd ~/new-bare.git
    git symbolic-ref HEAD refs/heads/git-svn
    cd ~/temp
    git remote add bare ~/new-bare.git
    git config remote.bare.push 'refs/remotes/*:refs/heads/*'
    git push bare

Then delete ~/temp.

Rename git-svn to master (instead of renaming trunk):

    cd ~/new-bare.git
    git branch -m git-svn master

And, since I had no branches, just rename/move new-bare.git to the final resting place, and then clone it out.
At http://www.codinghorror.com/blog/2012/07/new-programming-jargon.html there is some talk of "Yoda Conditions". I sent an email to someone explaining what that is, as follows:

So, Yoda speaks "backwards", right? In many of the languages in common use, "=" is for assigning values to a variable and "==" is for comparison. A common bug used to be assigning where you meant to compare. Usually an assignment evaluates as "true" in a comparison context. Suppose some variable 'i' is 3. "if i=5" will set i to 5 *and* cause the program to think that the condition has come about. But i was actually 3 and would have failed the comparison when correctly written as "if i==5". The code will execute fine, but not produce correct results.

So some IDEs (code editors) warn you that you're assigning when you should be comparing. To help you when not using those IDEs (which is often), some programmers write "if 5==i", which is a perfectly valid comparison. If typed as "if 5=i" it will NOT COMPILE and thus will never execute and will never be subtly wrong.

Spectacular failure (crash, or 500 internal server error) is better than subtle failure (grade is B- where it should be B, or vice versa) because spectacular failure is immediately noticeable and doesn't lead you to think you have the right results when you don't.
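To make that concrete, here is a small Ruby sketch of the same idea (Ruby is just the example language here; the variable is the 'i' from above):

    i = 3

    # The typo: assignment where a comparison was meant. Many editors warn about
    # an assignment inside a conditional, but the code still runs: i becomes 5
    # and the branch executes even though i "should" have been 3.
    if i = 5
      puts "subtly wrong: i is now #{i}"
    end

    # Yoda style: the same typo is now a syntax error (the Ruby analogue of
    # "will not compile"), so it can never run and be subtly wrong.
    # if 5 = i
    #   puts "never reached"
    # end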
(See old article http://www.thesatya.com/blog/2011/05/rails3_upgrade.html )

    sudo apt-get install git-core curl libmysql-ruby
    sudo apt-get install libmysqlclient-dev libsqlite3-dev

Install rvm from the rvm website, usually with something like:

    curl -L get.rvm.io | bash -s stable

Also add to your .bashrc or equivalent:

    [[ -s ~/.rvm/scripts/rvm ]] && source ~/.rvm/scripts/rvm

Install pre-requisites, and ruby, and set the default ruby:

    rvm pkg install zlib
    rvm install 1.9.2 #or 1.9.3
    rvm default 1.9.2

Then you can put .rvmrc files in your project directory (or a new directory) containing something like:

    rvm use ruby-1.9.2-p180@my_gemset_name --create

Then you can set up a Gemfile, and then run:

    gem install bundler
    bundle install
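A Gemfile is itself just Ruby. A minimal one for a setup like this might look roughly like the following; the gem names and versions here are illustrative only, not from the original post:

    # Gemfile -- illustrative sketch; pick the versions your app actually needs
    source 'https://rubygems.org'

    gem 'rails', '~> 3.2'
    gem 'mysql2'                 # matches the MySQL client libraries installed above
    gem 'sqlite3', :group => [:development, :test]

With that in place, bundle install resolves and installs everything into the gemset selected by the .rvmrc.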
In response to a post at http://37signals.com/svn/posts/3159-testing-like-the-tsa

"Don't aim for 100% coverage."

Nope, please test as much as you can. It's ok to break out things like:

    function goto(url) { window.location = url; }

and not test the goto function, though. Also, don't test other people's code.

"Code-to-test ratios above 1:2 is a smell, above 1:3 is a stink."

Perhaps, but what's the metric? I hope it's not LOC.

"You're probably doing it wrong if testing is taking more than 1/3 of your time. You're definitely doing it wrong if it's taking up more than half."

I think we do this sometimes. If we can't imagine how to test something, I think that's a smell of testing other people's code, or testing too granularly. Perhaps an integration-style test, or a behavior-driven test, would work better. Do we really need to test that this method sets the right call stack and calls the right 45 methods in ActiveRecord? Or do we want to start with a CSV file and an empty DB, run the method under test, and at the end have those records in the DB? (A sketch of that kind of test is at the end of this post.)

"Don't test standard Active Record associations, validations, or scopes."

Don't test other people's code, unless incidentally in a behavior-driven test.

"Reserve integration testing for issues arising from the integration of separate elements (aka don't integration test things that can be unit tested instead)."

Sure, unless you're behavior-driving.

"Don't use Cucumber unless you live in the magic kingdom of non-programmers-writing-tests (and send me a bottle of fairy dust if you're there!)"

Cucumber is non-trivial.

"Don't force yourself to test-first every controller, model, and view (my ratio is typically 20% test-first, 80% test-after)."

Well, it depends. If you test-first, you get better coverage and leaner code (code that does only what the test calls for). Behavior-driven development (BDD) is correlated with test-first and better coverage.

The 37signals blog then quoted Kent Beck: "I get paid for code that works, not for tests ..." So true.

Last updated: Apr 14 2012 23:04
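As promised above, here is a rough RSpec-style sketch of that CSV-to-database behavior test. The Importer class, the Student model, and the fixture path are hypothetical names used only for illustration:

    require 'spec_helper'

    describe Importer do
      it "loads student records from a CSV file into an empty database" do
        expect(Student.count).to eq(0)

        Importer.new('spec/fixtures/students.csv').import

        # Assert only on the observable outcome (records in the DB), not on
        # which ActiveRecord methods were called along the way.
        expect(Student.count).to eq(3)   # whatever the hypothetical fixture contains
      end
    end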
I work in an agile environment. We have daily standups, weekly iterations, no sprints (as far as I can tell or care). We churn through stories in our story tracker. We work towards a release marker. We try to do acceptance testing early and often. We have a separate and powerful (as in, they can reject stories and we must fix) QA department. We also have a separate Product Lead, and she determines the features. We front-load technical debt by having stories near the top of the backlog/queue for things like "there must be a QA server", "there must be a test harness page in the app". We pair all day, though the pairs can be loose. We rotate pairs frequently, the nominal rate being once a day. Pairing station setup is dual keyboard/video/monitor with mirrored view. MacOSX, RubyMine. We happen to use Ruby, Padrino, some Rails, some Java, some shell. We're all polyglot, full-stack programmers comfortable between Ubuntu, Mac, command line, ruby, HTML, CSS, Javascript (though we use HAML and SCSS).

That's just background. We have, in this environment, used both Pivotal Labs' Pivotal Tracker and Atlassian's JIRA, in that order. We have tried to use the same methods with both products, modified by each product's requirements, limitations, and mind-set. What follows is my own opinion and does not reflect the company's or the team's.

Tracker shows a unified, single view of all stories in a project. There are various queues, like current, backlog, icebox, but everyone sees the same view. Not so JIRA. We're still having arguments like "but I see this story thus," "oh but I see it this way, and oh bah...."

Tracker also forces the stories to be sorted one way. Users can drag and drop stories above and below each other, very easily, and we take that to mean the priority. Can't (easily) do that with JIRA (unless ours isn't set up to do that -- and we want it to be, so I wonder why not). In JIRA, you can drag stories to change their order -- in that view only. There's some kind of disconnect between GreenHopper and the Rapid Board, apparently. An upgrade may close the gap. But it's a source of annoyance and complication, confusion and delay.

Tracker's forced sorting is also useful in another way. I once used GForge for issue tracking. The manager would set every gosh-darn bug to "top priority". That's a good way to get the developers to pick their own priorities. What we really need to know is the urgency of the bugs (issues/stories) relative to each other.

The good points about JIRA, which AFAIK Pivotal Tracker can't match, are things like linking issues together and grouping issues by team. Also, JIRA can link issues across projects. It has nice dashboards, too. This issue linking is great for Change Management tickets. CM tickets are linked to the issues being promoted to production, and before we announce the change, we make sure all associated tickets are complete, and we get a nice list of changes to go in the email or whatever.

Bottom line, I prefer Pivotal Tracker. JIRA is this enterprise-level thing (like most things about Java). It's a lumbering beast. Tracker feels agile, lean, and has been the only issue tracker that didn't get in my way.
A sprite sheet is a single image which has all the images used on a web page. Each image on a web page has overhead: the image file format itself, plus network overhead (connection establishment, which is not a huge concern with pipelining, plus extra transfer time negotiating TCP headers, HTTP headers, time for the web server to find the file... it adds up). If done right, the single image of the sprite sheet takes fewer bytes and loads faster than multiple individual images. This works best when the individual images are smaller "accent" images, not large photos of your product.

CSS is used to show the appropriate image from the sheet. Typically, the sprite sheet is placed as the background on a div which is the right size to show just the part of the sheet containing the appropriate image. This "window" is set by setting a width and height on the div, and then using background-position to shift the sheet into the right place. The position is the offset of the image coordinates into the sheet, with negative signs.

    .some-div {
      width: 20px;
      height: 10px;
      background: url(/images/sprites.png);
      background-position: -54px -90px;
    }

This says to show a 20x10 portion of the sprite sheet image starting at a 54px X-offset and a 90px Y-offset into the image. Why negative? Because background-position needs to "pull" the sprite sheet up and to the left so as to offset into it. Think of the div as a window into the sheet. background-position moves the sheet around behind the window. It does not move the window.

How to make a sprite sheet from individual images? Stick your sprite candidate images into a sprites directory or whatever and do this:

    convert -background transparent sprites/* +append sprites.png

A transparent background is a best practice, so that any background colors on your page show through and you don't get a rectangular patch. If you know what the background color is, you can use it, as transparency can actually increase the size of the final image. +append causes the end image to be horizontal, which is another best practice. You can combine multiple convert commands to make a more square sprite sheet. First horizontally combine each row (+append), then vertically stack the rows (-append):

    convert -background transparent row1/* +append row1.png
    convert -background transparent row2/* +append row2.png
    convert -background transparent row1.png row2.png -append sprites.png
From http://blog.thefrontiergroup.com.au/2011/06/stubbing-out-rails-env/

To stub the Rails environment for rspec-based testing, you can add this function to spec_helper.rb:

    def stub_env(new_env, &block)
      original_env = Rails.env
      Rails.instance_variable_set("@_env", ActiveSupport::StringInquirer.new(new_env))
      block.call
    ensure
      Rails.instance_variable_set("@_env", ActiveSupport::StringInquirer.new(original_env))
    end

And use it like this:

    it "should have the correct default options" do
      stub_env('development') {
        # Rails.env.development? is now true
        # Do something here
      }
    end

So stub_env gets called with what the environment should be. It saves the original environment and restores it later in the ensure block (to avoid test pollution -- other tests should not see the fake environment). It sets the requested environment and calls the block that was given to it.

The key here is that Rails.env accesses an instance variable on the Rails object, and that instance variable is @_env. ActiveSupport::StringInquirer is what that instance variable is supposed to contain. It's what allows us to do Rails.env.development? instead of Rails.env == 'development'. A ridiculous thing to do, in my opinion.
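For reference, here is roughly what StringInquirer itself does, independent of the stubbing trick above (plain ActiveSupport, runnable outside Rails):

    require 'active_support/string_inquirer'

    env = ActiveSupport::StringInquirer.new('development')
    env.development?      # => true
    env.production?       # => false
    env == 'development'  # => true -- it's still just a String underneath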