
Sunday, August 24, 2014

The Diagnostics Plug – a missing abstraction in most systems

Another car analogy – this time with iPhone support...

As the website tom's guide – tech for real life – tells us:

“... Every car sold in the U.S. since 1996 features a built-in engine control computer that can be accessed with the right tools. This is called On Board Diagnostics-II (OBD-II), ...”

And this information can actually be accessed all the time – even while the car is driving.

Nowadays it is actually pretty simple to access the diagnostic information from the car. All you've got to do is buy an OBD-II Bluetooth adapter and an application like DashCommand or Engine Link or the like, and you can easily see all the information from the diagnostic sub-system of your car. Some applications also allow for the extraction of the error codes and the error log. If you use one of these applications you can find out what's really wrong with your car as soon as you notice that something is off – way before an appointment at the garage would be possible.

Having a running (software) system is much like having a running car - it gets harder to work on the parts when there is more load on the system. If the database load is at 80% it becomes hard to run an additional query to find out how many stale entries are in a certain table. And if the server is maxed out, it is not so easy to just run a second instance of the system to find out if the supposedly optimized CSS really is optimized. Some systems (like Apple's OS X) have built-in mechanisms to enable the collection of diagnostic information with a relatively low penalty for the overall performance of the system. The same holds for some web servers and database systems – but what about your business application? Is it possible to easily find out how many logical errors occurred in the last half hour? How many searches returned too many or too few results? If that isn't possible, you may simply not have needed to explore these kinds of questions yet.
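To make that concrete: the following sketch (Python, with purely illustrative names – not taken from any specific framework) shows an in-process diagnostics registry that counts such events with timestamps, so a question like “how many logical errors occurred in the last half hour?” can be answered while the system is running:

    import threading
    import time
    from collections import defaultdict, deque

    class Diagnostics:
        """In-process registry of diagnostic events, queryable at runtime."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(deque)  # event name -> timestamps

        def note(self, event):
            """Record one occurrence of an event - cheap and non-fatal."""
            with self._lock:
                self._events[event].append(time.time())

        def count_since(self, event, seconds):
            """How often did `event` occur within the last `seconds` seconds?"""
            cutoff = time.time() - seconds
            with self._lock:
                timestamps = self._events[event]
                while timestamps and timestamps[0] < cutoff:
                    timestamps.popleft()  # drop stale entries on read
                return len(timestamps)

    diagnostics = Diagnostics()

    # In the business code:
    #     diagnostics.note("search.too_many_results")
    # Behind the diagnostics plug (e.g. an admin-only status page):
    #     diagnostics.count_since("search.too_many_results", 30 * 60)

The point is not the particular data structure, but that the business code gets one cheap, thread-safe place to drop its observations – and the diagnostics plug gets one place to read them.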

If you do have the information available, it is quite often the key to effective (and probably also efficient) bugfixing and code optimization.

In his work Bertrand Meyer introduces us to the concept of “design by contract” and the related concepts of pre- and postconditions as well as invariants. While Meyer originally thought of aborting the program whenever any of the expectations (assertions) was not met, I found it a sensible approach – especially for “legacy” systems that have been running for a serious amount of time – not to stop the whole system, but only to make a “note” of the violation and make these notes available via the diagnostics port for everyone who is invested in the health of the system.
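A relaxed contract check along those lines might look like this sketch (illustrative names again, reusing the `diagnostics` registry from the sketch above): instead of raising, the violated expectation just leaves a note for the diagnostics plug:

    def soft_require(condition, label):
        """Precondition check that notes violations instead of aborting."""
        if not condition:
            diagnostics.note("contract.violated." + label)

    def transfer(amount, account):
        soft_require(amount > 0, "transfer.amount_positive")
        soft_require(account is not None, "transfer.account_present")
        # ... and then proceed exactly as the legacy code always did ...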

By analogy, I think it's a good idea to make the diagnostic information of your system (any system) available at runtime too.

Disclaimer: “Just” make sure that this diagnostics plug is not also providing access to internals that should not be available from the outside... like credit-card information and the like.

Till next time
  Michael Mahlberg

Sunday, July 13, 2014

Testing: How to get the data into the system

Even though the correct term for a lot of the “testing” going on would be verification, let's just stick with “testing” in the titles for the time being...

General verification workflow

The general way to verify that a piece of software does what it is meant to do seems quite simple:

  • Formulate the desired outcome for a defined series of actions
  • Put the system in a known state (or the sub-system or the “unit” – depending on your testing goal)
  • Execute the aforementioned defined actions
  • Verify that the desired outcome is actually achieved
  • [Optional] Clean up the system [1]
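In code, this workflow maps directly onto the setup/execute/verify/teardown shape that most xUnit-style frameworks offer. A minimal, self-contained sketch (the Cart class just stands in for the real system under test):

    import unittest

    class Cart:
        """Toy system under test - stands in for the real (sub-)system."""
        def __init__(self):
            self.items = []

        def add_item(self, name, price, quantity):
            self.items.append((name, price, quantity))

        def total(self):
            return sum(price * qty for _, price, qty in self.items)

    class CartTest(unittest.TestCase):
        def setUp(self):
            # Step 2: put the system into a known state (the fixture).
            self.cart = Cart()
            self.cart.add_item("book", price=20.0, quantity=3)

        def test_total(self):
            # Step 1 happened up front: for this fixture we expect 60.0.
            total = self.cart.total()      # step 3: execute the defined action
            self.assertEqual(total, 60.0)  # step 4: verify the desired outcome

        def tearDown(self):
            # Step 5 (optional): clean up - trivial for an in-memory object.
            self.cart = None

    if __name__ == "__main__":
        unittest.main()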

While this process sounds simple enough, there are enough pitfalls hidden in these few steps to have spawned a whole industry and produced dozens of books.

In this post I want to tackle a very specific aspect – the part where the system is put into a “known state”.

Putting the system into a known state might involve several – more or less complex – actions. Nowadays, when it's possible to automate and orchestrate the whole creation and setup of machines with tools like Vagrant and Puppet, you can even set up the whole environment programmatically.

You might not want to do that for each unit test, which brings us to the question of when to set up what – which I will try to address in some future post.

The problem with the data

However big or small the test-setup is, one thing that is very hard to avoid is providing data.

The state of the system (including data) is often called a fixture, and having those fixtures – known states of the system with reliable, known data – is a fundamental prerequisite for any kind of serious testing, be it manual or automated.

For any system of significant size, if there are no fixtures there is no way to tell whether the system behaves as desired.

Getting the data into the system: Some options

In general there are three ways to get the data into the system:

  • Save a known state of the data and import it into the system before the tests are run.
    In this scenario the important question is “which part of the data do I load at which time?” because the tests might of course interfere with each other and probably mess up the data – especially if they fail. Consider using this approach only in conjunction with proper setups before each test, amended by assertions and backed up by “on the fly” data generation where necessary.
  • Create the data on the fly via the means of the system.
    Typically for acceptance tests this means UI interaction – probably not the way you want to go if you have to run hundreds of tests. Consider implementing an interface that can be accessed programmatically from outside the system and that uses the same internal mechanisms for data creation as the rest of the software (see the sketch after this list).
  • Create the data on the fly directly (via the datastore layer).
    This approach has the tempting property that it can be extremely fast and can be implemented without designing the system under test specifically for testability. The huge problem with this approach is that it duplicates knowledge (or assumptions) about the system's internal structures and concepts – a thing that we usually try to avoid. Consider just not using this approach!

So, do you actually have fixtures? And how do you get to your data?

’til next time
  Michael Mahlberg


[1]

One can either put the effort in after the test or in the setup of the test – or split the effort between the two places – but the effort to make sure that the system is in the correct state always has to go into the setup. Cleaning up after the test can help a lot in terms of performance and ramp-up time, but it cannot serve as a substitute for a thorough setup.

Sunday, October 27, 2013

D is for Design ... and T for Verification

That is, if we talk about software development and the acronym TDD.
Often translated as Test-Driven Development, the acronym has been around since the late 1990s, and especially the book by Kent Beck – rightfully at that time, in my opinion – made people think of TDD as a development technique.
Now when you ask different people what "development" is, a lot of them might argue that it is about coding only – while others take a much broader view. This leads to many people (including Kent Beck if I recall correctly) pointing out that the second D in TDD is very strongly about the design aspect of development.
Since I came into closer contact with some of the proponents of Exploratory Testing (ET) a couple of years back I can't help but wonder if the whole term is misleading. Of course it is about a certain aspect of testing, but those engineers who "really" do testing in the hardware sector (e.g. with cars, planes, elevators etc.) would consider such tests only as checks or verifications, which don't require a specialist in testing to perform them. After all, everything that is done in these "tests" is to check whether a certain assumption by the developer is met by the system. (Or whether the axle distance is really what the designer specified, or whether the elevator cable really is capable of holding the specified weight, or... you get the picture.)

The (hardware) testers I know, on the other hand, do something different – and usually only leave a pile of scrap metal when they are through with their tests. They test how much weight is necessary to break the elevator cable, at which speed or lateral acceleration the torsion changes the distance of the axles (which usually doesn't agree too well with the car), and so on.

And TDD simply doesn't give us that kind of test – the tests that look for the unexpected or un-specified. And that is why we still need a lot of non-automated (e.g. exploratory) testing.

So please bear in mind that in a true cross-functional team TDD has its place in development – but so does true testing know-how.

Until next time
  Michael Mahlberg

Friday, March 11, 2011

Digital Taskboards I'm aware of as of 2011-03-11

At the Agile London usergroup meetup last night the topic of digital taskboards came up and I promised one of the participants to send a link to /one/ of them… Since I couldn't remember which one I was referring to (it was Kanbanery, I found my note eventually) I went through the list in my head and realised that the list isn't so short any more…

  • Qanban…a very promising (and free as in speech) project from @xlson which unfortunately doesn't undergo much development right now. But… it's open source, the source is on github - you know what to do!

  • Kanbanery…got a lot of mentions lately on twitter, but I only did a test drive

  • Pivotal tracker…A friend of mine uses that a lot (Cheers @jcfischer) and is quite pleased with it

  • See Now Do… haven't tried it, but I know the product owner and from that I infer that it ought to be good

  • Atlassian / Jira / Greenhopper… that was the one we were searching for alternatives to...


@Lynne: Sorry - too many to tweet directly

Saturday, January 22, 2011

Context does matter in UX and UI design

In a recent tweet a friend of mine (Selena Delesie) stated that she is "Surprised to discover there are programmers who still don't put limits, data type restrictions, and error handling on form fields."

As much as I like twitter I soon came to realize that I couldn't put my answer in 140 characters - not even with unicode tricks.
But I'm not surprised at all - and I don't even think it's a bad thing that there are unchecked fields on forms.

As I said in my reply:
"depends on context & is a tradeoff between effort & possible harm.
user ∈ inhouse dev ⇢ lax checks"
user ∈ public ⇢ strict checks [didn't fit into the 140-character limit]

The scenario for user ∈ inhouse dev ⇢ lax checks

This would probably also be a 'C' or a very small 'D' on the Cockburn scale, which is explained in more detail on Alistair's own site.
If I build a three-hour-effort tool for me and my fellow developers to manipulate database metadata, one that is intended to run only on our development machines, I probably
  • won't put too much effort into checks against SQL-Injection - we can do all the harm we want anyway [and it doesn't matter much if we accidentally write "drop database;" in the database's command window or in an unchecked entry field]

  • won't put too much effort into checks against the wrong type - I'm pretty sure all of the intended users (e.g. me and three co-workers) can handle an error message like "invalid type at line 8745 in <name_of__3rd_party_sql_library>"

  • won't care if it's possible to enter 4 GB of data through one of the entry fields - the possible harm is well within acceptable limits for the stakeholders (me and my colleagues), and the harm could be achieved in much simpler ways

The scenario for user ∈ public ⇢ strict checks

On the other end of the spectrum would be a piece of software used by a large number of people who might or might not have malicious intentions and where incorrect values might cause harm to serious money or even life (that would be a bigger 'D', an 'E' or an 'L' on the Cockburn scale, IMO).
The classic 'E' example for me is the ATM where I probably
  • would make sure that only numbers are entered by using a hardware keyboard that only consists of numbers [and check the input values to be a little bit safer with respect to physical attacks]

  • would replace drop-down boxes by large hardware buttons at the edge of the screen

  • and so forth

The 'L' examples that come to my mind are mostly related to heavy machinery or medical equipment - in both cases the aforementioned principles - limit the choices and represent data entry through manipulation of physical objects - are heavily applied.
Most applications fall somewhere in between, and nowadays more and more applications that deal with serious money (e.g. online banking) are realized without the hardware representation. In these cases, especially for applications running on the internet, I would
  • Put limits on the length of input data - on the client if possible and on the server just to make sure

  • Make sure the data that gets typed in represents the correct types* as early as sensible [that might be on the client - I'd add a server-side check for good measure in all cases and might drop the client-side check if it becomes too intrusive for the user]

  • Implement an error-handling scheme that gives helpful information to the user with as much detail as they request, and at the same time informs the developers and maintainers of the system of the possible quirks in the system, so that the UX can be optimized to produce fewer occurrences of that specific error in future releases.
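As a sketch of what the first two measures might look like on the server side (plain Python, names and limits purely illustrative):

    MAX_NAME_LENGTH = 100  # limit chosen for illustration only

    def parse_amount(raw):
        """Type check: reject anything that isn't a positive number."""
        try:
            amount = float(raw)
        except (TypeError, ValueError):
            raise ValueError("amount must be a number, got: %r" % (raw,))
        if amount <= 0:
            raise ValueError("amount must be positive")
        return amount

    def parse_name(raw):
        """Length limit - enforced on the server regardless of client checks."""
        if not raw:
            raise ValueError("name must not be empty")
        if len(raw) > MAX_NAME_LENGTH:
            raise ValueError("name longer than %d characters" % MAX_NAME_LENGTH)
        return raw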

The problem with the two scenarios "user ∈ inhouse dev ⇢ lax checks" and "user ∈ public ⇢ strict checks"

is that most applications are somewhere between them. And sometimes applications evolve from a little developers-only helper application into something that is used by more and more people, so the last responsible moment** to rework my three-hour project into a product becomes hard to determine. But if I as a developer have kept my options open (e.g. because clean coding has become second nature) it should be possible to productize the tool.
But - it has to be a conscious decision!
In my opinion it neither makes sense to make a cheap tool cat-proof nor is it responsible craftsmanship to offer a product to the public that doesn't employ proper checks.

* but that is a topic for another article - the only type that a user with a keyboard can key in is "array of characters" - what it represents is 100% context: e.g. "feed" might be a word in one case - it's the hexadecimal representation of 65261 in others.
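(In Python, for instance, the very same four characters parse both ways:)

    int("feed", 16)   # read as hexadecimal: 65261
    "feed".isalpha()  # read as a word: True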

** See e.g. Mary and Tom Poppendieck on the topic of the last responsible moment and the keeping of options. Due to a fancy website design on their side I can't provide a link though - but as of 2011-01-22 the statements can be found after clicking on "focus on learning" in the principles box on the left hand side of their website.

BTW: I really wonder how this page with the "is element of" and the arrows will display on other browsers and systems...

Monday, September 29, 2008

scm & build: Levels of configuration

Here's the first little part of the (to be - or not -) series on configuration management and build management.

Although I wanted to start with some clarification on "Task level commits" I actually concentrated on different levels of configuration in a build environment. Here we go...

The levels of configuration

One of the biggest differentiators between a one-man show and a team effort is the set of different levels of configuration that have to be managed – and this is also a point where the quality of the whole build process can be heavily influenced.

Basically – unless the application-to-be is monolithic – there are four levels of abstraction: machine-dependent, user-dependent, purpose-dependent and (last but not least) project-specific configurations. Each of these has to be managed separately and consciously to avoid (too much) manual intervention. Talking about indirection, let me cite (once again) David Wheeler, to whom the phrase “Any problem in computer science can be solved with another layer of indirection.” is attributed. As he stated in the second part – which is often omitted – “[But] this usually leads to another problem”, so let's have a look at the relative pros and cons of this fine distinction. To start off, let's examine each level a bit closer.

Remarks:

By the way: of course there are at least two dimensions involved in this topic as well: run-time configuration and build-time configuration. For the sake of this argument I'll postpone that discussion to the [[build]] topic.

Purpose dependent configuration

Let's start with the purpose-dependent configuration since this is a concern covered in most modern environments. The purpose I'm talking about is also known as build type or target environment or something similar to that. Typical purposes are “Test”, “Debug”, “Release” or – a bit less frequent – “Integration”. Depending on the purpose of the build there usually are a number of things that differ. For “Test” there might be some hard-wired shortcut to circumvent server roundtrips or a “don't really send to printer” entry or some other special behaviour that is meant to make testing easier (or even possible) without imposing side-effects on already installed systems. If you're building for “Debug” – one of the most commonly differentiated purposes – you'll certainly want to include debug information in your code, something you probably don't want to ship (although that could be disputed, but that is another story). “Release” of course is the purpose with which you build the shippable product once all test and QA work has been done. The necessity of an “Integration” purpose arises only in projects where you need to integrate several sub-products and usually has rather project-specific configuration needs.
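In its simplest form, purpose-dependent configuration is just a lookup keyed by the build type – a small sketch with invented settings:

    PURPOSE_SETTINGS = {
        "Test":    {"really_print": False, "server_roundtrips": False, "debug_symbols": True},
        "Debug":   {"really_print": True,  "server_roundtrips": True,  "debug_symbols": True},
        "Release": {"really_print": True,  "server_roundtrips": True,  "debug_symbols": False},
    }

    def settings_for(purpose):
        """Fail loudly on unknown purposes instead of silently defaulting."""
        try:
            return PURPOSE_SETTINGS[purpose]
        except KeyError:
            raise ValueError("unknown build purpose: %r" % (purpose,))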

And of course there are some things (e.g. logging) that need to be configured differently for each of these levels. But speaking of logging we encounter another type of configuration that should not be mixed with the purpose-specific configuration: the project-specific configuration of components. While I'll go deeper into those in the next paragraph, the important part with respect to things like logging is to be aware of the fact that some things have both – a project-specific configuration and a purpose-specific one. Trying to manage both in the same way can create real nightmares (I guess everybody who has tried to keep Log4J configuration files useful for an extended period of time without that conceptual distinction knows what I'm talking about).

Project specific configuration

This usually is the first configuration option you come across. Almost any project nowadays uses some reusable libraries. Those of course have to be adapted to the specific needs of the project and thus the first level of configuration indirection comes into existence.

Although these configurations are applicable on many levels – from configuration information specifying a window's layout to the much-mentioned log-file configurations – at least they have a clear association. They are “just another kind of source code” and thus relatively easy to handle.

Machine dependent configuration

This one strikes as soon as there is even one more developer! The path which used to point at /usr/bin has to point to /usr/local/bin, the drive for intermediates that used to be C: has to be E: and the monitor resolution goes from 1024x768 to 1600x1050. Consequently some things have to be configured somehow – and here we definitely need a distinction between build-time and run-time.

User dependent configuration

The distinction between user-dependent configuration and machine-dependent configuration is a bit hard to make in a time where the people:machine correlation moved from n:1 to 1:n. But even now – where lots of people have more than one computer – the real relationship is more like n:m, since some computers are still shared. Especially build and integration machines are prone to sharing. Now, even on the same machine, the configuration might differ in paths, desired screen resolutions and mounted network shares, so there is basically the same set of configuration information as there is in the machine-dependent part, but it needs to be managed in a separate space.

To summarize: We have the purpose specific configuration which is a central [[build]] topic, the project dependent configuration that correlates to source code, the machine dependent configuration that correlates to hardware configuration management, and the user specific configuration that somehow correlates to profile information. All of these should have traceable connections to identify possible configuration errors.
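To make that summary concrete, here is a minimal sketch (illustrative only) of resolving a single setting across the four levels – each level stays separately managed, but the lookup is uniform:

    def resolve(key, purpose_cfg, user_cfg, machine_cfg, project_cfg):
        """Look a key up across the four configuration levels.

        The order is a policy decision; here the purpose-specific value
        wins over user, machine, and project defaults (in that order).
        """
        for layer in (purpose_cfg, user_cfg, machine_cfg, project_cfg):
            if key in layer:
                return layer[key]
        raise KeyError("%r is not configured on any level" % (key,))

    # The intermediates drive differs per machine, the log level per purpose:
    project = {"log_level": "WARN", "intermediates": "./build"}
    machine = {"intermediates": "E:/build"}
    user    = {}
    purpose = {"log_level": "DEBUG"}  # e.g. a "Debug" build

    print(resolve("intermediates", purpose, user, machine, project))  # -> E:/build
    print(resolve("log_level", purpose, user, machine, project))      # -> DEBUG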

After I have raised all these questions of course I should also answer them – I'll do so some time in the future and will provide a follow-up link in this post...

I think that even just the concept of having different levels of configuration enables people to create more stable build environments.

Cheers
_MM_

Monday, September 15, 2008

Build-, Version-, Configuration- and Source-Code-Management

Lately I found myself talking about build management and configuration management a lot. And since this blog lies deserted in the wild anyway, I think this is the perfect place to ramble about that stuff so I can point others at some more resources than I can right now. (And of course others can point me to my own ramblings if I get lost in the discussion.)

Topics I’d like to discuss (although this probably won't ever come to an end) include simple, practical, down-to-earth things like
  • The simplest way to set up svn to work with Xcode for a small workgroup
  • How to set up (any) DVCS for Xcode
  • How does SCM integration work in Xcode
but also things that seem to be intuitive to some but controversial to others
  • Command line builds
  • To branch or not to branch
and more conceptual topics (the most important to me) as
  • Releases, Versions, Variants and other “Numbers”
  • What is Multi-Dimensional SCM
  • Staging and promoting 

Stay tuned for the first episode in about a week (and nudge me if I haven't published it in two weeks!)

Cheers
_MM_

Saturday, August 25, 2007

Divas and Geniuses

In a blog entry from the start of the week Mark Masterson cited from the lessons learned document of a past project:


"Change the design / architecture to reduce reliance on the divas"


That reminds me of a former client of mine who used to say
"If you've got a genius in the team - get rid of him"


At first that's rather sad, but after a while - and with changing responsibilities - I came to realise that there can be situations (actually lots of them) where the advice is absolutely sensible.



It can be sensible because most developers are of average capabilities!
After all - that is exactly what average means.
Therefore the probability to have "above average" developers declines with the size of the project (and the organisation) simply because of the definition of average.
Even if all (or at least almost all) the developers in a certain team are "above average" (e.g. compared to some outside group of reference developers) most of them will - by definition - be of average skill within the team. That's where the "get rid of the genius" sets in.



If one of the persons on the team is way ahead of all the others - let's say she is a specialist in compiler construction - her advantage can become a disadvantage for the team as a whole. For those "geniuses" TSTTMPW (The Simplest Thing That Might Possibly Work) probably is quite different from the things the rest of the team perceives as "simple".

Configuration files are a good example - while plain text with very little syntax is the "simple" thing for most of us, a specialist in compiler construction wouldn't mind using a syntax that is "syntactically a little richer" to gain a simpler implementation. He might end up with a configuration language like sudoers' - where guides to the grammar (defined in EBNF) and the grammar's grammar are provided in the manual pages just to give the average user a chance to understand sudoers.


Actually I worked on a compiler project myself way back in the eighties and used to be kind of fluent in BNF, but figuring out sudoers still took me a while. And judging by the number of

<username> ALL=(ALL) ALL

(basically: allow <username> to do everything he wants as root)


that I've seen on other people's Macs, not many of them go to great depths deciphering the format...



Back to projects: Of course the idea to get rid of every smart person in the team would not be the best option - unless you want the project to fail.

But the "geniuses" - or, to be fair: those who have advanced knowledge and/or experience compared to the rest of the team - have to be handled carefully. Only for very isolated tasks they should be left to work it out all alone.

For the rest of their work they ought to work closely together with other team members, as long as their personality allows them to adapt their ideas to a level that's appropriate for the whole team. Should the latter not be an option, then option one is valid again, of course - the team should get rid of 'em.

But in my experience that is seldom necessary. Most geniuses are quite willing to agree on a sensible level of "simple" as long as there is a sensible discussion.

So after all it's not so sad anymore. The bottom line is that my job is not to get the "best" possible solution but to make the team as a whole as effective as possible.



_MM_

Friday, July 06, 2007

From Cave Drawings to Hieroglyphs to Times New Roman - and back to Cave Drawings.

Sometimes I don't understand our business...
Just recently I listened to an interview with Grady Booch where he (once again) emphasized that he never intended the UML to be used for programming (i.e. as a programming language).
I'm a proponent of visual modeling myself, and after experiencing the method wars of the nineties I'm glad that such a thing as the UML unifies the meaning of arrowheads, boxes and dashed lines.
But I just can't understand why people think that they will be able to describe complete software systems of all kinds in pictures (although it's quite possible for some domains and to a certain level).
When thinking of the written word and pictures I just can't avoid thinking about cave drawings and "real" writing.
It's very common to judge the development of a civilization by its capability to write. Or as Wikipedia puts it:
Historians draw a distinction between prehistory and history, with history defined by the advent of writing. The cave paintings and petroglyphs of prehistoric peoples can be considered precursors of writing, but are not considered writing because they did not represent language directly.
So where does it put our so-called "industry" when some of us attempt to describe complex systems in pictures alone?

Saturday, June 02, 2007

Are technical topics no business topics?

I started to write this about a year ago - I think it's time to finish my earlier posts to get at least the basic ideas behind them in writing before I try to start new topics.

Every now and then I come across the silly notion, that technical decisions - like "Java Yes, Ruby No" - are considered to be "not of business relevance" and are to be left to the "IT department" since "the business folks wouldn't know anyway".

Apart from the inherent hubris of "IT" people with that attitude, I think this point of view is rather short-sighted. If there are implications the business is not aware of, it is the solution provider's responsibility to inform the business people.

But - getting back to Java vs. Ruby style questions - to build a certain application with an estimated life-span of six months (e.g. because there is a legal requirement for exactly that time-span) might be a sensible thing to do in language 'Y', while it may be more sensible to assign two interns to do manual data corrections than to build an application using language 'X'.

You may substitute X and Y in the above paragraph with Ruby and Java respectively according to your personal bias (or - for that matter - with any pair of programming languages), but the business people really should have the last word in the decision.

ceterum censeo: We (including me) should really stop using the term "IT" ... if only I knew a suitable substitute ...

Thursday, May 31, 2007

MacBooks on the Rise: Tools revisited

More and more of my friends, colleagues and acquaintances turn to Mac OS X, so I think it's time to revisit my list of essential tools.
The list is based on personal experience, and my starting point was Stefan's list. Let's start with some picks from it (for a detailed discussion of the applications he used to like at the time, see his list):
  • Stefan's Emacs is (of course) replaced by vim/gvim on my list - unfortunately not as nicely ported as on Linux and Windows - but it still does the job for me.
  • Quicksilver definitely is a must - when working on Windows, one of the shortcuts I use the most is Windows-R (run a single "command"). Quicksilver goes far beyond that and is indispensable on a Mac when you're a keyboard addict.
  • Terminal (iTerm) omitted - no advantage for me. I'm happy with the built-in Terminal.app
  • The Omni Group's productivity suite was also very basic (not as in "simple", but as in "need to have") stuff - at least if you have to organize thoughts and discussions (OmniOutliner) and draw expressive diagrams and pictures effectively (OmniGraffle). The browser (OmniWeb) was a nice add-on but tends to interfere with my powersaving options, so I switched back to Safari (the built-in browser). Unfortunately the productivity suite seems to be no longer available - which makes it easier to focus on the important products.
  • Talking about browsers: Of course different browsers like Camino and Firefox come in handy if I use any Web2.0ish sites (and how could I avoid them nowadays)
  • Instead of ecto I went for Qumana for blogging - and ended up using a text editor and OmniOutliner for the job ;-)
  • I always preferred Adium over Proteus, but since AIM and ICQ users can also be reached via the built-in iChat I don't use either of them any more. For more compatibility with Windows users Skype (> 2.0) comes to the rescue.
  • Ssh-keychain: well - yes, I use it too.
  • And although I do very little development on my Mac, Eclipse really does a great job for those little chunks of Java that come along.
Now let's look at my ~/Applications and the main Applications folder.
Here we've got (noteworthy only):
  • Productivity
    • I can't work without a mindmapping tool. My choice is FreeMind - a Java-based application that runs nicely on Windows and OS X. And with a little fine-tuning it even shows the brushed-metal look and feel of OS X.
    • Another "promiscuous" application as Larry Ellison might have said in the early eigthies is GanttProject which also runs on windows allmost as well as it does on a Mac. A simple but effective tool - for those situations where you just have to visualize your plans in such a way - that is written in java..
    • Every once in a while I have to make a little note. Using Quicksilver I usually just start up a text editor, make the note and save the file to the desktop. But I'm trying to change to Sidenote - it would be so much easier...
    • AquaPath (again a recommendation from Stefan) to play with XPath expressions.
    • I'm quite happy with vim/gvim (I think I mentioned that already ;-) ) but sometimes a more GUI-ish editor is helpful. That's where Smultron enters the scene - I don't yet feel the need to get TextMate.
  • Not necessary but nice
    • The MBPs are known for their "offensive" thermal behaviour. And even though I know that there is little to worry about, I still like to check with Temperaturmonitor.
    • WordPod is a nice little tool to transfer text files to an iPod for reading - not heavily used, but useful.
    • AudioRecorder as a lightweight alternative to firing up GarageBand just to record a few minutes. Saves directly to an AIFF, Apple Lossless (M4A and MOV), MP3, MP4 (M4A and MOV), or WAV file.
  • The office stuff
    • Pages and especially Keynote are great alternatives to the standard MS Office apps. So iWork was one of my first acquisitions (actually I ordered it together with my Mac).
    • But still - since most of my clients are hooked on Microsoft, MS Office is a must.
    • And for those who try to avoid the MS Office trap and turn to OpenOffice, NeoOffice is my choice for interaction.
    • Whenever I have to fiddle with graphics that I can't handle with OmniGraffle or the built-in apps (iPhoto does it for 99% of my photo-editing wishes), Posterpaint and Seashore come to the rescue.
    • The essential tool for web development of course is a text editor! ;-) But the way CSSEdit lets me fiddle with CSS styles is unmatched by anything I've seen before. Coda might turn out to be a great addition, but I'm not yet sure.
    • Being the lazy person I am, I try to avoid typing as much as I can. RapidoWrite helps me by replacing abbreviations with any text I like (as long as I've defined the abbreviation, of course).
  • One of the nice things about OS X is the fact that PDF is a native format. Still, some tools come in quite handy for certain tasks.
    • PdfMergeX gives a little more flexibility to the handling of pdfs.
    • PDFView is an alternative to the built-in Preview application and is in some ways more "natural" to the eye.
    • Although many printer drivers already support "brochure" layout, CheapImposter is my tool of choice if I want to be able to create brochures on not-so-smart printers without duplex capability.
    • With Yep! there's a great alternative to searching for PDFs all over my hard drive - and I really should use it more ...
  • System tools
    • SyncTwoFolders does exactly as the name implies - great for keeping USB-sticks up to date.
    • Going back in a browser's cache is not always easy - but Retrospective makes it easier.
    • Even a nnn-GB drive gets stuffed after a while. With Filelight and GrandPerspective it's easy to identify the hotspots - actually I use almost exclusively GrandPerspective - Filelight is just an alternative for "special cases".
  • Connectivity
    • As always, the Mac (at least mine) just works when it comes to wireless networks. But sometimes I want to know why and how. iStumbler doesn't tell me too much about my system - but it tells me (almost) all I want to know about the WiFi networks in my vicinity.
    • And once I'm connected to a network, Flame tells me which Bonjour services are available. (Run it in an airport lounge just for fun - it's amazing how many people intend to share their music and printers with everybody. Especially considering the fact that you deliberately have to turn on sharing in OS X.)
    • The built-in Mail.app does almost exactly what I want - but sometimes I just want to check my mailbox to see if it's expedient to burn bandwidth (think of UMTS/GPRS in a foreign country) by really accessing it. That's where MacBiff - an IMAP-savvy menu extension - is exactly what I need.
    • Even Apple doesn't always get it right - to really get the best compatibility with my Palm I needed to install the MissingSync. Now everything works fine.
  • Multimedia
    • VLC - any questions? Handles all the video formats out of the box.
    • But when I use QuickTime I sometimes would like to run it fullscreen. That's very easily possible by sending AppleScript commands to the player - even without the "Pro" version. Fullscreen4Free does exactly that and includes a (nice?) GUI.
    • Before CoverFlow was integrated into iTunes I definitely liked it better. May or may not be available in the future. The last download possibility I knew of was at MacUpdate.
    • One of the essential tools for iTunes is of course beaTunes. This tool analyses songs not only by BPM but also by "colour" - kind of representing the overall mood of the song - and really helps building better playlists.
And - last but not least - the system extensions I use
  • Although not really necessary, Growl is very helpful when many applications are running in the background and their user notifications shall appear at least somewhat coordinated.
  • As you might have guessed from my use of Temperaturmonitor, I really like to know what's going on inside my machine. MenuMeters provides exactly the information I want.
  • Even though the MDI interface is the standard for OS X applications, there is no standard way to access all the windows via the keyboard. Witch gave me back the capability to "tab" through all open windows. (Arguably that's not really necessary once one gets used to Exposé.)
Oh - did I mention dashboard widgets?
But it's getting late, my battery is almost empty and I still got a lot of other work to do - so here's just a taste...

Sunday, September 03, 2006

Location Changer: Good Idea, unlucky (for me) implementation

I just gave WiLMa a trial run and promptly had to reconfigure my mail system. Here is an excerpt from my mail to the author:


One of the great things in Apple's Mail.app (unlike the last version of T-Bird I tried, or some other mail clients) is that it handles SMTP servers on an account basis.
I don't use an open SMTP-Gateway and I neither like nor encourage open SMTP-Gateways. Therefore each of my outgoing mail addresses (private, business, community work) has to use its own SMTP Server/Gateway.
The first time I started WiLMa and went to the 'SMTP Servers' pane I knew I was in (mild) trouble: Only one SMTP-entry...
I switched to Mail.app and guess what: all accounts already bore the same entry for their SMTP servers. And obviously one of them had even disappeared from the list (probably because I have two accounts on the same machine).


At least my other system settings were left alone (sigh), and it only took about 10 minutes to reconfigure the accounts. (Testing included.)


Update, less than a day later:
I just got mail from the author - he really seems to be very responsive, so I think I'll give it another try in December.


Here's (part of) what he wrote:
[...]As for the software's initial behavior, I hadn't thought of it from that angle and will try to get this fixed in time for the October release. Perhaps an opt-in for features instead of an opt-out would be a good solution?[...]


Powered by Qumana


Wednesday, May 17, 2006

Mac OS X newbies

Perhaps this thread should be read by all Windows-to-OS-X converts. Although it's a little long, it's worth scanning through.

OS-X Simple Graphic Tools

One thing - apart from the CD-copy stuff - that really bugs me in OS X is the absence of anything even remotely resembling MS Paint. I suppose most Mac users own some version of Photoshop or the like, but I'm a consultant and developer, not an artist - that's why I'd like to have something lightweight, easy and free.
After some googling I found
Seashore (probably a little heavy)
and
Posterpaint

let's have a look at them... more to come
-mm

P.S.: Drawing could be done with
Inkscape or
ArtRage 2.1

Tuesday, April 11, 2006

Moving from Windows to Mac

Well, I've owned a shiny new MacBook for a week or two now, and I think it's time to wrap up my experiences so far.
Although I spent a few nights trying to get the new feeling, I was able to be productive almost from day one.
The built-in apps are a breeze: Mail does just what it's meant to do, the browser works like a charm, and after installing the Palm Desktop my Palm synchronized (almost) perfectly with iCal and the Address Book. I only miss the categories, but I'll have a look at the MissingSync (for Palm OS) and I might be completely happy again ;-)

[posted with ecto]