Sunday, January 29, 2012

Managing files, and Google Chrome Bookmarks

Last year I moved over to using Google Chrome. Various mathematics and graphics sites I was directed to on Twitter required IE9, Chrome, or Firefox. Since IE9 requires an update of the operating system, that left the choice between Chrome and Firefox. Since I use various Google tools and they seemed faster in Chrome, I opted for Chrome. I was also experimenting with exporting my Personal Brain file for a web site, and it too only displayed correctly in Chrome and Firefox: all the link lines between thoughts are missing in IE8. Firefox may have been the better option, since I encountered a problem with Chrome when visiting ExcelCalcs, with site menus disappearing behind video images displayed on the opening page: but this appears to have been fixed.

Files and Folders
In any case, I don't like either the Chrome or Firefox bookmark files. Shortcut files waste hard disk space: a small file uses up an addressable block larger than it needs. However, shortcut files can be arranged anywhere on the hard disk, to create task-specific folders, or associated with project folders, totally independent of the browser. When I do internet research I tend to accumulate hundreds of files and shortcuts. At the time of acquiring them I may or may not sort the files and shortcuts into some rational collection of topics. So barring the time-consuming task of sorting into topics, I have VBScripts (.vbs) which sort files into folders based on year, month and file extension. Some scripts create folders locally, others move files from one folder structure into another. Once sorted into similar chronological folder structures it then becomes easy to use Beyond Compare to eliminate duplicates. I've tried duplicate file finders: they just duplicate the mess, take far too long, and then require a considerable amount of time to manually decide which are the originals and which are the duplicates. Moreover, I don't actually want to delete duplicates: I want a chronological archive, showing the historical development of the various things I work on. So custom scripts became the way to go: though the scripts at the moment just create an alternative mess, as there are some situations where I need to keep different file types together. To this end I've been experimenting with FlexTek/DiskBoss and XYplorer: with these applications I can search, exclude folders, and otherwise move files and auto-increment filenames if duplicates are involved.
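The VBScripts themselves aren't reproduced here, but the year/month/extension sorting logic is simple enough to sketch. Here is the same idea in Python (the function names and the auto-increment naming scheme are my own illustration, not the actual scripts): build a destination folder from a file's modification date and extension, and append a numeric suffix rather than overwrite when a name collides.

```python
import shutil
import time
from pathlib import Path

def dest_folder(archive_root, filename, mtime):
    """Build a year/month/extension folder path for a file,
    e.g. archive/2012/01/pdf for a .pdf modified in Jan 2012."""
    t = time.gmtime(mtime)
    ext = Path(filename).suffix.lstrip(".").lower() or "no_ext"
    return Path(archive_root) / f"{t.tm_year:04d}" / f"{t.tm_mon:02d}" / ext

def sort_into_archive(source, archive_root):
    """Move every file under source into the archive, appending a
    numeric suffix if a file of the same name is already there."""
    for path in Path(source).iterdir():
        if not path.is_file():
            continue
        dest = dest_folder(archive_root, path.name, path.stat().st_mtime)
        dest.mkdir(parents=True, exist_ok=True)
        target = dest / path.name
        n = 1
        while target.exists():  # auto-increment to avoid clobbering
            target = dest / f"{path.stem} ({n}){path.suffix}"
            n += 1
        shutil.move(str(path), str(target))
```

The duplicate handling here only renames on collision; comparing the chronological trees for true duplicates is still a job for something like Beyond Compare.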

Archiving internet shortcuts in MS Access.
So I had just got into regularly sorting files and shortcuts into year and month, when I moved over to Chrome: no shortcuts, and no direct means of programmatically sorting my bookmarks: a step backwards. A big step backwards, because I also have Excel/VBA and MS Access/VBA macros to read internet shortcuts, extract the information and then archive it in an MS Access table. The table contains keyword fields, and other fields for grouping. Using queries in MS Access I can delete duplicate shortcuts, and otherwise search and find stuff that I need. Admittedly not altogether helpful, since websites disappear or get modified, and the weblinks become invalid. However, even Google returns invalid links, so it saves some time to search my database, get the name of the webpage and then search Google.
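Windows internet shortcuts (.url files) are just small INI-style text files, which is what makes the extract-and-archive approach workable. The real macros are Excel/VBA and MS Access/VBA; a rough Python sketch of the extraction step looks like this:

```python
from configparser import ConfigParser

def read_url_shortcut(text):
    """Pull the target URL out of the text of a Windows .url
    internet shortcut, which is an INI file with an
    [InternetShortcut] section containing a URL= entry."""
    cp = ConfigParser(interpolation=None)
    cp.read_string(text)
    return cp.get("InternetShortcut", "URL", fallback=None)
```

The shortcut's filename supplies the page title, so each file yields a (title, URL) pair ready for a table row.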

That I archive all my internet shortcuts in MS Access tables, and delete the shortcuts from my hard disk, did suggest that the Firefox bookmark manager was a possible alternative. But after a few days of messing with the Firefox bookmark manager I concluded it was too slow and unstable. I also experimented with Personal Brain, and imported all my bookmarks: but I have no means of programmatically sorting, tagging and linking, and I cannot get used to using Personal Brain as a starting point and launcher. Consequently MS Access remains the centre of everything: I can build tables cataloguing files, archive internet shortcuts, and otherwise link them together with project data, and display data in whatever format I want using VBA.

So either way, bookmarks have to be extracted from either the Chrome or Firefox bookmark.html files. At first I didn't think this was a problem: several years back I had written a simple Excel/VBA macro to rip URL references from html files. The purpose was to extract weblinks from saved webpages containing little more than hyperlinks, put the hyperlinks into an Access table, then delete the saved webpage and associated files. At one stage I saved webpages, but discovered I couldn't back them up to a CD, since CDs didn't support the long file names, nor my deep folder structures. So I got rid of all the saved web pages, after extracting the useful URLs, and otherwise printing to PDF. Now instead of saving webpages, I just print web research to pdfFactory.

Messing with the Google Chrome Bookmark file
So my existing VBA routine was seemingly of little use. I tried opening the bookmark file in Excel directly: it turned out not to be a properly formatted html file. I also tried opening it in XML Notepad and UltraEdit, but it turned out not to be a properly formatted xml file either. So I started writing a finite state machine parser [1], similar to the techniques for parsing AutoCAD DXF files: that was early last year, and I otherwise got side tracked. But it's a new year and time to clean up last year's accumulation of junk. I'm a womble: I tend not to throw anything away: near guarantee that within a few days of doing so, I will end up needing it.

Early last year I did do a Google search, and found code for parsing html and xml, but as indicated the bookmark file is not fully compliant with either. I also did find various projects for parsing the bookmark file, but all were based on programming languages which support searching using regular expressions, and otherwise exported the data to xml format. So I did another search this year and discovered that the file format is JSON, for which there are a variety of parsers in a multitude of programming languages: once again mostly using regular expressions and converting to xml.

Downloading code for a JSON parser, and then customising it specifically for extracting the data from the bookmark file, all seemed too complicated. I originally considered writing a parser because there is no guarantee on the location of data in the file, therefore it is better to find it by properly parsing the structured file format. However, it appears that each bookmark is written to a single line in the file, therefore I only need to chop up an individual string: I don't have to scan across multiple lines and find opening and closing tags. Therefore last night I discarded the state machine parser idea, and went back to the original brute force method. Read a line; if the string contains a URL, then chop it up, extract the various pieces of information, and store them in an Excel worksheet. From Excel I can then transfer to MS Access. I typically program in Excel/VBA since that is where all my additional functions and macros are for string handling and file handling: also programming in MS Access is a bit more cumbersome, as is working with tables {especially creating and emptying tables}.
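Since the underlying format is JSON, the brute force line-chopping can also be avoided entirely in a language with a JSON parser to hand. As a sketch of that alternative (in Python rather than VBA; the field names "roots", "children", "type", "url" and "date_added" are as found in Chrome's JSON bookmarks file):

```python
import json

def walk_bookmarks(node, found=None):
    """Recursively collect (name, url, date_added) tuples from a
    Chrome bookmark tree node: folders carry a 'children' list,
    bookmarks have type 'url'."""
    if found is None:
        found = []
    if node.get("type") == "url":
        found.append((node.get("name"), node.get("url"),
                      node.get("date_added")))
    for child in node.get("children", []):
        walk_bookmarks(child, found)
    return found

def read_chrome_bookmarks(json_text):
    """Parse the whole bookmarks file and walk every root folder
    (bookmark_bar, other, etc.)."""
    data = json.loads(json_text)
    found = []
    for root in data.get("roots", {}).values():
        if isinstance(root, dict):
            walk_bookmarks(root, found)
    return found
```

From there each tuple can be written out as a worksheet row, and transferred to MS Access as before.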

Being a brute force method, the macro is likely to come unstuck at some point in the future, but at this point in time it meets my needs. Well, with one exception: the Google Chrome bookmarks file contains ADD_DATE and LAST_MODIFIED fields, but does not show this information in the bookmark manager. LAST_MODIFIED relates to folders, and I am currently ignoring the folders I have in Google, but ADD_DATE relates to the bookmark: it is an integer type date value, but is not the same as date values in Excel/VBA. So the current task is to find out how to convert the value into an Excel/VBA compatible date value.
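For what it's worth: the ADD_DATE values in exported bookmark HTML are normally Unix timestamps (seconds since 1970-01-01), while Chrome's JSON file stores date_added as microseconds since 1601-01-01. On that assumption the conversion to an Excel/VBA serial date (days since 1899-12-30, where the Unix epoch is serial 25569) is just arithmetic. A sketch in Python; in VBA the first conversion is simply `addDate / 86400# + 25569` assigned to a Date variable:

```python
def unix_to_excel_serial(seconds):
    """Unix seconds (since 1970-01-01) to an Excel/VBA serial date
    (days since 1899-12-30); 25569 is the serial for 1970-01-01."""
    return seconds / 86400.0 + 25569

def webkit_to_unix(microseconds):
    """Chrome JSON 'date_added' (microseconds since 1601-01-01) to
    Unix seconds; the two epochs are 11,644,473,600 s apart."""
    return microseconds / 1_000_000 - 11_644_473_600
```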

Google Chrome API
I did find the Google Chrome API, and it appears I can access the bookmarks object directly, and otherwise build my own bookmark manager to work in Chrome. However, being built around a browser, I am unsure whether it is permitted to write the data to a local file. JScript run locally, through Windows Script Host, can read and write files on the local hard disk: this is thus something to look into at a future date.

On High Level Programming Languages and Application Languages
The first programming language I learnt was Fortran 77 at uni. At school we had filled various cards in and sent them off, the languages used being BASIC and APL: basically these were random one-lesson exercises, and of little real value. Well, one benefit I guess: if you want to solve some matrix problems, then using a computer is going to take a week or more to get results back; you can actually solve the problems in less time than it takes to fill in the stupid cards. At uni we moved away from cards, but otherwise had to fill in programming sheets and hand them in to data processing. Once typed up, I could then edit and modify the program as necessary to get it running. For which purpose, the VMS operating system was described in a completely mysterious and unhelpful manner, and then there was fighting to get access to a computer. Not much fun, and little learning.

At home we went and got a Microbee computer in a book, running CP/M-80 with a single 3.5" floppy drive, which we eventually extended to 2 drives: one for data, and one for applications. I messed with the BASIC programming language included for a short time, and then we got Turbo Pascal 3 {unfortunately on 5 1/4" floppy}. When we moved over to MS DOS, I eventually moved up to Turbo Pascal 5, but also started programming with Turbo C. I also messed with the Lotus 123 macro language (and As-Easy-As), the dBase/FoxBase application language, WordStar indexing and formatting codes, the Paradox application language, AutoLISP, and submit and batch file programming.

I like Object Pascal, but prefer the syntax of C, and especially prefer C for processing strings and crunching numbers. But I get lost in C when it comes to pointers, and find its data structures unreadable, with extremely cumbersome syntax. When I moved over to Windows I got Delphi and C++ Builder, along with Quattro Pro for Windows and Paradox for Windows. When it came to objects I preferred Delphi over C++. However, I ended up building more applications in QPro than Delphi. I reached a stage where I wanted to extend the functions in QPro rather than having to ensure macros would run. I accidentally found a book on Office programming whilst wandering around a book store at lunch time: bought it, learnt about the VBA language, then went and bought the Office 97 developer's kit. I then started writing more Excel/VBA code. At one stage the various Australian standards changed, and that meant that a program I was developing in Delphi became obsolete, along with my Excel function libraries. Since I use Excel as my primary calculating tool, it was more important to update my Excel/VBA function library than the Delphi library. The result was that I then translated the Delphi program to VBA. Another reason for doing so was that I had attempted to import the Excel type library into Delphi, so that I could use COM automation from Delphi, but the type library had too many conflicting keywords. I don't need the type library, but Pascal uses different brackets for functions and arrays whilst VBA doesn't: so programming Excel from Delphi is problematic, since I do not know whether an object is referencing an array or a function.

So I ended up spending as much time in the Excel VBA editor (VBE) as I do using the Excel worksheets. I have Visual Studio 2003 and 2005; the idea was to program in VB, to save the time spent converting VBA to Delphi for stand-alone programs. But VB is a different language again, and so the VBA code still requires translating and debugging.

So this has tended to put me in a quandary. Most Windows Script Host (WSH) scripts I write using VBScript, though occasionally I use JScript. For website development I originally used JavaScript, but moved over to VBScript, the prime reason being the existing libraries I already have in VBA.

However, the other year I got a netbook, for which I basically have no software, other than any freeware and open source stuff that I can find. It came installed with a demo of Office 2007; I didn't like the new interface, so didn't buy it, and uninstalled it. I have OpenOffice installed instead. The problem with OpenOffice is that it uses a different dialect of BASIC again, and also doesn't appear to directly support COM automation. I use Excel/VBA to control other applications through COM automation: intelliCAD, designCAD, turboCAD, MultiFrame, MS Access. {By the way, I discovered designCAD because its COM object is still accessible in Instant Shed Designer, which I was taking a look at, to see if it was of any use. Which raises a question: how can they sell something for less than the application it is built on? Milo Minderbinder.}

So which direction to move? The software companies seem to think turning a perfectly good screwdriver into a hammer is an upgrade. I recently built a spreadsheet for a client, using Excel 2003; they saved it in 2007/2010, I'm not sure which, but I cannot open it with the file converters. Given I have around 200, possibly 1000, unique spreadsheets, I don't want to mess around converting file formats: been there, done that with QPro to Excel: I still have QPro installed to check the original files occasionally. {Excel 97 could convert QPro files; Excel 2003 cannot: not to mention Windows classifies the files as a hazard.}

I don't see how changing the interface can be considered as being the same product: it becomes a completely different tool. In Lotus 123, As-Easy-As and QPro for DOS, my fingers flew across the keyboard. Then QPro for Windows introduced the property inspector, and the mouse became important. Little by little I am losing keyboard skills and becoming dependent on a mouse. I keep trying to get them back, but using the mouse is habit forming, and difficult to break when trying to get a job done.

In any case, I spend a lot of time programming, because I expect the computer to do the work, not me. The first backward step for Windows was that it removed the batch processing capabilities. Now there is Windows Script Host (WSH), and Windows can be programmed using batch (.bat), VBScript (.vbs), and JScript (.js). There are macro recorders around, but they are all generally context sensitive, and screen based: not command based.

I don't really like Windows, but most software I use is Windows dependent. I always wanted a Unix machine, but wow, System V required a massive 20 Mbytes for a full install: the largest hard disk on a PC was only that big, so no space for data. Now there are people struggling to create small Linux installs of 40 to 100 Mbytes, though most of these include a graphical windowing interface. The original core of the internet is built on Unix, and Unix is built around the C programming language (its scripting language is C-like), whilst most web browsers use JavaScript, which is a C-like language. I'm not sure, but I think OpenOffice supports JavaScript.

So I am contemplating moving over to using Ubuntu, Java and JavaScript. As it is, I have Ubuntu installed on a stick for fixing Windows when it crashes. The problem with Ubuntu is compatibility with clients, but how compatible do I need to be? Most clients are barely able to use a computer, but do have Windows, and make use of Excel. But recent projects have identified that Excel based applications are not really appropriate for their needs. I mean, 20 Mbyte Excel files are not really efficient, when I could write a Turbo Pascal program of around 250 Kbytes or so. There are plenty of people who can build Excel applications, and mostly they can be built on top of current manual processes, thus retaining some transparency. But such applications can become cumbersome, demand more computer resources than necessary, and are not altogether user friendly. Not the least of which is that many of them would be better built around MS Access, which is far better for storing and sorting data. I don't necessarily need MS Access: I can connect to mdb files using DAO or ADO objects from within Excel/VBA or another VBA programming environment.

That I guess is the deciding factor: COM automation and/or the .NET framework. There have been rumours that VBA will be removed from Office; it has already been removed from AutoCAD. That is not a problem: I already program designCAD and Multiframe external to the applications, using Excel/VBA, because there is no VBA editor and no programming environment built in. The applications are just COM automation servers. Some people were not happy AutoCAD lost the VBE, but AutoCAD was always programmed in AutoLISP using an external text editor, so it is not really a major issue. As such I avoid AutoLISP and use other programming languages to generate script files (.scr) or otherwise parse DXF files. If I lose Excel/VBA, then VBA is no longer important, and I can move over to another language, like C# for example.
It's a matter of finding the right language to deal with objects, and avoiding problems with pointer arithmetic. I should also mention that I do not like the VBA approach to classes, and its creation and referencing of objects can be more of a hassle than working with pointers. In a nutshell, I swap programming languages to suit my needs at a given time.

The primary programming task is bashing files: translating one data format to another, simply extracting information, or automating file generation or program execution. Most number crunching I do directly in the Excel worksheet. At one time I used QEdit, and then UltraEdit, for manipulating files, but now more often than not I use string formulae in an Excel worksheet, either to generate data input files, or otherwise modify them. Even if I ultimately write a VBA macro, I typically test searching and cutting up strings using Excel cell functions, then rewrite using the equivalent VBA functions.

However, I do use VBA functions a lot, and also use VBA to generate tables of data. I don't like complex cell formulae: they are difficult to read, and I don't agree with MathCAD about textbook-like formulae; MathCAD diverges away from mathematical notation in any case. Spreadsheets have the advantage that it is not necessary to name variables, as you are already working within an indexed matrix. So there is no problem differentiating sigma used for stress from sigma used for standard deviation: a problem if doing statistics on stresses. Australian standards are dependent on many test conditions to select one formula over another: I find such conditional checks read better when written in VBA rather than in a cell formula. Secondly, whilst Excel can generate tables of data, it's not very efficient, for the formula is repeated in every cell. By using a VBA loop, it is more likely that every cell is calculated based on the same formula and conditional checks. The problem with that is that changing a parameter doesn't automatically update the calculations. Circular references are also a problem in Excel. Whilst circular references can be used to deliberately force iteration to find a solution to a calculation sequence, Excel flags circular references as an error: I therefore don't like that approach. Further, it starts getting complicated if I want to generate a table of values, or want automatic calculation based on other values. Like recently, for balustrades, I wanted to calculate the maximum span of the top rail based on a range of conditions. For one set of conditions I could simply rearrange the equations and calculate directly; for another I ended up with a quadratic, which I could solve in the worksheet cells and get a direct answer; but for other conditions there was no simple solution. So I ended up repeating in VBA all that was calculated in the Excel worksheet, so that I could iteratively find a solution and return a simple value to the main body of the Excel formatted report.
Part of the problem was that the limiting deflection for the span is dependent on the span: larger spans permit larger deflections. In the first instance, with a simple expression, I could simply substitute the span ratio into the expression for deflection, rearrange, and get the answer directly. However, I had simplified the expression for the deflection limit; when replaced with the actual expression, it got complicated: hence writing an iterative function. Before doing this, however, I simply used Excel's Goal Seek capability to find the answers: that however did not fit in with what I wanted to calculate and where I wanted it to fit in with the rest of the calculations.

In-House Software Development
Such situations are why I adopted a policy of writing our own engineering software, rather than buying commercial. I'm not going to write my own version of Multiframe or AutoCAD: not yet anyway. I say not yet, because we have an in-house 2D frame analysis package, written in Turbo Pascal, and currently being transferred over to Excel/VBA. Also it is apparent, in writing my own programs to generate drawings, that I have all the data structures necessary to write a CAD package: I just don't have the graphics, printing or an editor. In translating the 2D frame analysis to Excel/VBA, I have used the low cost CAD package designCAD 3D to draw the frame and moment diagrams. But it is becoming apparent that I could use the shapes layer in Excel to draw. This would be good. I think I mentioned before that Multiframe and AutoCAD are far too expensive to use as analysis and graphics engines for point-of-sale design software. Given that most suppliers with software have software built around clumsy spreadsheets, it seems there is opportunity to build a more appropriate application around the full capabilities of Excel/VBA. But would it be better to build around OpenOffice, noting that OpenOffice is available for both Windows and Linux? It would be beneficial if OpenOffice doesn't keep getting automatically updated, so that applications built around it remain static: I don't want to be constantly fixing bugs caused by office updates. This is what pushes in favour of higher level programming languages and building stand-alone applications. The problem with stand-alone applications is that a lot of work is required on the user interface: traditional practice places it at some 80% of the effort. By building on an existing application, development of the user interface is reduced: no file formats, no reading/writing file problems, no editor development, no screen display issues, and no printing issues. The developer of the primary application has resolved all those issues.
Additionally, even if I write a stand-alone program, it's likely dependent on a multitude of Windows resources provided by the API, so if Windows updates then the program may become unstable. So whilst Windows has resolved many of the hardware compatibility issues, it has largely replaced them with software problems.

The other issue is whether such applications should be internet aware and function over the internet, and if they can function over the internet, should they also operate on mobile phones? This pushes towards using Java and JavaScript as the programming languages. On the other hand, the .NET framework is supposed to be about it not mattering which language you program in: source code in many different languages can be used in the one application. It is something of an alternative to the Java virtual machine. Which is another issue: programs compiled using Visual Studio are not exactly stand-alone programs, and are still relatively large compared to other compilers, despite all the external resources they depend on. Picking the best programming language is not so easy, so don't choose: just use them all, as the needs determine.

As mentioned, I have a policy of writing in-house software; by doing so I keep up to date with changes in Australian standards, and don't have to wait for commercial software writers to update and release new versions. More important, however, is the freedom to format my own reports, to carry out calculations in the sequence desired, and use the results for other purposes. Everything written becomes a building block towards working with larger and larger systems in less and less time.

On one hand engineering is about numbers rather than mathematics; on the other it is more about theory than calculations. For example, engineering is not about substituting numbers in an expression like M=wL^2/8; it is more about the information such an expression gives me about the physical world and the choices I can make. With commercial software you plug numbers in, then get the size of a beam returned. In general, software typically checks compliance with codes of practice; alternatively it may either return a single option or a list of several possible options in terms of selecting a suitable component part. As a result the software typically requires the use of trial and error for anything else. For example: pick a component, say a structural section, check if it complies; if it doesn't, then pick another and check again. But how to select the first choice, and it having been rejected, which direction to go for the next choice? It's the theory which gives the direction. With the theory it may be possible to calculate directly for the problem at hand, whilst the available commercial software may only be able to get there through trial and error.
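To make the point concrete with the expression quoted: for a simply supported beam under uniform load w, M = wL^2/8, so with the theory in hand a section's moment capacity gives the maximum span directly, where check-only software would iterate over candidate spans. A trivial sketch (Python, numbers illustrative only):

```python
from math import sqrt

def max_span(moment_capacity, w):
    """Rearranging M = w*L**2/8 for a simply supported beam under
    uniform load w gives the maximum span directly:
    L = sqrt(8*M/w)."""
    return sqrt(8.0 * moment_capacity / w)

# e.g. a section with 20 kNm capacity under 10 kN/m:
# L = sqrt(8*20/10) = 4 m
```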

Furthermore, these decisions typically need to be made by persons who are just aiming to get on with building things; it becomes a major delay and frustration for these people when they have to wait on others to crunch some numbers and make design decisions {some days words look weird and foreign}. Manufacturing is moving towards making design more accessible to the population at large, so that manufacturers can make products better suited to the needs of the end-user. Manufacturing wants to get much like the construction industry and have zero inventory, making to order. However, 5 years of research, design and development (RD&D) to get a product to market doesn't fit well with this objective, nor does a need to build and test prototypes. The building industry can unleash bridges and massive buildings onto the world as real world experiments, placing many lives at risk based on mere compliance with codes of practice, which are themselves based on inexact and incomplete scientific research. Despite civil and structural engineers, the building and construction industry is based on some relatively inexact science: passed off all too easily as they know what they are doing. For mainstream buildings this is not altogether an issue, as there is plenty of historical repetition; for the novel and innovative, we are getting a lot closer to the limits of where the theories have been tested. The codes of practice are somewhat lacking in appropriate constraints, and in demanding prototype testing.

Commercial Software for Non-Engineers (Structural)
In any case, in the past people just made stuff for themselves and accepted the risks of doing so. In the modern world there is a system of exchange, and an inference of supply by specialists, and consequently extremely high demands on expected performance as opposed to specified performance. It's not good enough to simply state what a product is fit for; many disclaimers have to be given with respect to what it is not fit for. Then there is also the issue of how fit is fit enough, and is it adequate? Every time some person has an accident, the performance criteria are raised across the board. Whilst I think it is necessary to design better and safer products, I also think individuals should take greater responsibility for their choice and manner of use of such products, not always seeking compensation. Their irresponsible behaviour places inconveniences on everybody else: e.g. smoke alarms, RCDs, bicycle helmets, guards on exercise bikes, to name a few. Whilst these things provide protection in the event of a problem, the problem is not rampant, and these safety devices somewhat magnify the real cause of the problem. For example, why was a baby crawling around all over the floor sticking its fingers in everything whilst the mother was riding an exercise bike: and how did the child get so close to the exercise bike, to have fingers amputated? You don't need an exercise bike: just chase the kid around and keep it out of danger, teaching it about the dangers. But then the parents would have to know the dangers. Similarly, houses don't just burn down; there has to be a cause, a source of fire: often someone smoking in bed, or electrical devices left on. Not everyone cuts through power cables with a power saw. As for falling off bicycles, the most common injury is a broken collar bone: people fall sideways; hitting the head against the ground is not all that common. Sure, it is recommended to get these things and use them.
The objection is to making these things mandatory, and forcing unnecessary upgrades in short time frames or at inappropriate times.

So allowing end-users to make parametric changes to a product, via the use of software, is more involved than simply crunching numbers. The software already in the market is appalling: the number crunching may be valid, but the reports output are near useless, the software has inadequate built-in restraints and error checking, and it may not be relevant whatsoever to the product desired.

The main culprit so far is the software used by the nail plated timber truss companies. The software is proprietary, and only agents supplying trusses have access to it. The reports traditionally provided inadequate information regarding input parameters and decisions made regarding the selections specified: for example, no evidence that it is feasible to form a given nail plated connection. South Australia now has a minister's specification, identifying requirements for such structural software when used by those who are not structural engineers. Part of the requirement is training in the use of the software, and appointment of a person responsible.

This should also apply to retained standard calculations, for much of my work results from variations from standard calculations. Some days it seems that manufacturers of structural products have all forms of structure available, and simply pick up whichever calculations are at hand and submit them to council for approval. If I am to believe the manufacturers, then 10% of the time there are issues. From their rejected applications, and the standard calc's I may get to see, I hazard that the 10% only represents those issues caught by the council, and that much is getting approved when it should be raising questions. There is also a great deal that gets built without approval.

So we have software in the market in need of improvement, standard calculations to be replaced by software, and owner builder stuff to be checked using DIY software. But the software really needs to be thought about and properly designed, not just launched into and written, as currently appears to be the case. This launching into happens because such software is mostly considered to be simply Excel spreadsheets. Though I am also aware of some extremely expensive stand-alone software which was not up to its specified task: operating at point-of-sale. It's not just a matter of structural engineering and computer programming knowledge: there are other issues to consider. In particular there can be no required training for its use: the typical person in the street should be able to use the software without any special training. It should produce reports useable to the end-user and also reports suitable for council approval. The data files should also be suitable for use in other software of greater use to certifiers. A certifier may want to check bending moments and stresses, whilst the DIY user is not so interested in them: thus have alternative versions of the software. Which was the problem with the truss software: the only way to check was with general purpose frame analysis software, and it takes the certifying engineer considerably longer to build a model than it takes the timber estimator using the specialist software. There are two basic requirements: 1) Specification-of-intent, 2) Evidence-of-suitability. Currently available software used by non-engineers only meets the specification requirements; the evidence-of-suitability is inadequate and reliant on an assumption, or questionable certification, that the software makes a valid assessment. The new minister's specification attempts to make the certification more reliable and resolve other issues.

Software is written to speed things up. We can send code direct to CNC machine tools and produce a finished product. But is such an auto-produced part fit-for-function? Is there any evidence of its suitability for purpose?

But it's not just software used by non-engineers that should be of concern. Software used by engineers is also questionable in its fitness-for-function, and most such software carries many long disclaimers. The computer reports generated are very close to being a mass of scrap paper: typically no one has ever looked at them. So if the designer didn't use them, what use does the designer think the certifier will get from them? More thought needs to go into the whole design, assessment and approval process to make it more quality robust. It is likely that information technology and knowledge engineering will play a great part in this process.

{well that's my divergent waffle over for today: Sat 2012-Jan-28  23:04}

  1. Dennis N Jump, 1989, AutoCAD Programming, TAB Books


PS: I have now set up another blog over on wordpress. The current blog here will continue with the chaotic free writing. As I get time to more formally structure and rewrite a piece as a more polished article, I will post the modified article over on wordpress. I still won't think it finished, but hopefully it will have a more focused flow, and start new ideas in new sections instead of as a divergence from the current flow.

PPS: Wasn't aware that there were actual limits on free writing, such as time limits or otherwise page or word limits. Such aspects of free writing are based on an assumption of difficulty finding something to write about, rather than a mass of divergent ideas interfering with clear thought. I don't have a time limit; the constraint is interruption for meals, or simply that the number of divergent ideas gets way beyond that which I can write about, and I just stop writing and keep on thinking: possibly talking to myself. To that end it is only partially successful at clearing my mind.

Thursday, January 26, 2012

Education or Experience?

There is an argument taking place on the LinkedIn Construction Management group:
Experience or Education. Choose One ? (and I know you want to say both, or depends on situation. Just choose one)
It is about as meaningful as:

 Which is bluer : blue or blue?
The contract says supply blue, you contracted to supply blue, so supply blue. We did supply blue! No you didn't! Yes we did! No you didn't! If it's not blue, then what colour is it? I don't know, but it's not blue: supply blue!
And on and on it goes.

Obviously there must be a difference between education and experience, else the question is stupid and our language is otherwise plagued with surplus words. The difference between the two words is subtle, and their refined meanings have not been clarified within the context of the question. Not clarifying is like relying on something as meaningless and inconsistent as "industry standard practice".

As I mentioned earlier, there is a global debate on the purpose of education taking place as government after government cuts back on funding. Most of the educators are generally of the view that education is not about schools, teachers, examinations or parchments. Education is about learning, and with increasing access to the internet, self-learning is going to increase.

There is a saying that:
 wisdom comes with observation not age.

I think their focus on the importance of the internet and access to information is relatively narrow. Any child who has access to books, workshops and an appropriate and interesting environment to observe, has the potential to learn faster than the slow pace of the national curriculum. A child with access to history books, can be way ahead of the child restricted to the slow pace of information presented by a teacher on a blackboard. A child surrounded by stone arch bridges built by ancient Romans, may learn little about history, and little about construction, unless they have enquiring minds, and take an interest in going beyond what they can actually see.

It is not necessary to go to a university to learn; such is only necessary to obtain a parchment as evidence of such learning. There appears confusion in the community that people have to be taught: as if the teacher does everything, the pupil doesn't do anything, and if the pupil fails it's the teacher's fault. Teachers do not and cannot impart knowledge or competence.

The teacher's role is to assist pupils and students to learn: the pupils and students do the learning. To me the difference between pupils and students is that pupils are asked questions and students ask the questions. Students have enquiring minds, pupils do not. So the teacher's first task is to turn the pupil into a student. When I was at school, if I asked a teacher a question, the usual response was: you will learn that next year. My response was to go to the library and learn it straight away: I wanted to learn, not follow a schedule. The teacher cannot answer the question because the pupil is not considered to have adequate prior learning to understand the answer. But that is not a problem for the self-learning student; it simply raises more questions to seek answers for: it drives further learning.

Words in a dictionary are defined with more words. Thus the meaning of a word is not clarified until the meanings of all words used to define one word are also equally defined within the context of the learners experience. A dictionary cannot really define hot or cold water, nor can it describe blue. Thus a first dictionary tends to be a picture dictionary. But the illustrations in the dictionary are still symbolic like the words, and thus symbols still have to be given some real world context: and not all words are dependent on the sense of vision. Ultimately the words start representing entirely abstract ideas, like democracy, and the dictionary cannot clarify meanings, entire libraries of books on the concept cannot clarify the meaning. Every individual has a subtle variation in their understanding: permitting greater or lesser freedoms than another. Similarly the words education and experience have subtly different meanings to each individual, and such perceptions and meanings also change with the passage of time. The word "experience", in particular is highly emotive.

The argument between education and experience could be equated to the ancient Greek argument between the theoretical and the empirical. Those in favour of theory contended the senses could be tricked and therefore not relied upon (education). Those in favour of the empirical contended that theory could be highly fanciful and bear no relationship to reality (experience). But not altogether. Increased education typically implies increased learning, and acquiring more competencies. Increased experience does not imply more competencies, nor more learning. Increased experience more typically implies more time on the job, more repetitions of the task, and increased proficiency at the task. Rarely does more experience relate to greater diversity of experiences. More experience is more likely to indicate being stuck with an habitual way of doing things, and otherwise resistance to change.

Arguing education versus experience is unhelpful. The Australian Qualification Framework (AQF) is built around the concept of competencies and evidence of attainment. Whilst the most common way to obtain certification of competencies is to attend an educational institution or registered training organisation (RTO), it is possible to obtain certification by presenting evidence in recognition of prior learning (RPL). The learning is focused on the attainment of competencies: competencies that need to be certified to assist industry/society in appointing the right people to the job.

Education or Experience is a silly debate! Left or right arm, left or right leg: you can only keep one: make a choice. Which is actually the kind of attitude held by many at the contracting end of the construction industry. The characteristics required are typically those of the hard-nosed, uncompromising bully who has won many rounds in the boxing ring. As the character Shark put it in the TV series of the same name:
Find me a truth that works.
Contracting is highly adversarial; the truth doesn't altogether matter, it's a question of who is going to pay for a variation: the buyer or the contractor. Each side tries to make the other responsible. Largely a matter of wearing the opposition down. The traditional bully tactics however are on the way out, and have been so for many years now. Technical competence is of increasing importance, for increasingly we are dealing with established technologies: hence diminishing acceptable excuses for running over budget and over schedule. Not only can the project be planned, but the plan has been executed many times before. Consequently the bulk of the potential variations should be understood at the start: at least for anything that is not a direct copy of a previous project. Such knowledge can be obtained by education, training and/or experience. It is largely a matter of observing and learning.

Some 100 years ago, engineering design was largely dependent on the scientific method, to investigate and develop predictive models for the behaviour of technologies. Today the predictive models have largely been developed, and the technologies established. Today's so-called engineer is largely dealing with minor parametric variations of established technologies, which is why as a community we have high expectations of the performance of such systems. New technologies we expect to have been thoroughly tested before release to the environment. Though when it comes to the very large, each and every one becomes a real world experiment, placing the community at risk.

Whilst mistakes are an important part of learning, there are some mistakes that we do not want to be made on the job. Hence as I indicated earlier in the debate, Engineers Australia classes the formal academic awards as evidence of attaining stage one competencies: the enabling competencies. It is not necessary to have such formal awards, but that is the preferred and easiest pathway. Providing evidence of attaining stage one competencies without formal studies and included examination is more difficult, and not fully catered for.

As I also pointed out, the institutions of engineers were also the original qualifying and examining bodies in the UK and Australia. But when industry starts requiring MICE, MIMechE, MIStructE or MIEAust before they will provide a job, then problems arise. For the only way to gain such membership was to have been employed in the practice of engineering. Hence there was, and is a need for some acknowledgement and evidence of enabling competencies, just to get started.

The problem is that many now see the degree as the only requirement; the institutions are of diminishing importance, and universities are of more importance for fulfilling learned-society functions. Part of that is because the stage 2 competencies are highly irrelevant to the needs of industry, society and the individual: the stage 2 competencies concern joining a profession. Also, in terms of Engineers Australia, the stage 2 competencies are so generic they could apply to anyone doing any job: train driver, plumber, shop assistant. Generic competencies may be beneficial if the reference to engineering discipline was removed, additional competencies were required for the discipline, and still further competencies for specific areas of practice. Put simply, I wouldn't give a B.Eng MIEAust CPEng NPER(struct) the job of designing the structure of a single dog kennel, let alone a multi-storey building or highway bridge. This is because I do not believe the work practice report is a reliable indicator of having achieved the necessary competencies for a specific area of practice. It is far too dependent on whether the supervisor has adequate competence, or exercises adequate duty of care.

From South Australian practice, where we require an independent technical check, I am aware that the one state with registration of engineers, Queensland, has far too many RPEQ's who self-certify rubbish. If we are going to have a registration system and restrict who can and cannot practice engineering, then we had better have a system in place which properly assesses the necessary competences. I say necessary, because the currently required competences are already in place and are not adequate.

Neither education nor experience is developing the necessary competencies. There is an additional system of training and assessment required: something far better than the engineers graduate development programme, and superior to a masters degree in engineering practice. Something more akin to military training and the way fire fighters train. Not just developing competence, but proficiency and appropriate habitual response.

There is learning simply because the world is an interesting place. Then there is learning to fulfil necessary functions within society, to provide cogs for the machinery of industrial society. The characteristics of these cogs need to be more clearly defined, and the quality of the cogs supplied significantly higher than we are currently getting. But people don't like being treated as cogs, so this has to be reconciled against people's desire for quality of supply and desire for freedom.

We have a problem in that people do not want to pay the monetary cost of the training required to sustain the technological systems which meet their daily needs. Hence technological systems are designed to remove the need for advanced skills, and then production moved to areas of low labour costs.

A corporation is a collective; so is a city, a town and a village. A new participatory democracy is required at the local level. Education, experience and appropriate competencies for all are important to the function of democracy. Education of the ruling minority is fine for a republic.

It being Australia Day, no doubt there are those who will raise the republic issue, and ousting the monarchy. I have little issue with freedom from rule of a monarchy, I just oppose a republic. Rightly or wrongly, to me a republic has a ruling elite, and is not a democracy: USSR, Republic of China, etc... It also appears to me that it is the Australian government that has the more parental attitude, making this and that compulsory for all, which in many instances is just to create a market: RCD's, smoke alarms, bicycle helmets.

The population needs to get more involved: water security, food security, the competition watchdog, supermarkets pushing local family business out, pressures on farmers, environmental pollution, manufacturing moving overseas, the increasing cost of formal education, health care systems, the aging population, housing supply, the carbon tax, and energy security. No fuel to generate electricity, then pumps don't work and we have no water.

Education may not provide all competencies. But it is not experience that is important, it is learning. Both education and experience without learning and the development of competencies are worthless. People have to be viewing a bigger picture than simply their job, and move beyond the apparent perception that employers have a responsibility to employ. They are not employers, they are businesses; they do not need to employ anyone. It is people who have to convince business owners that people are better able to meet the needs of people, that people are an essential and integral part of technological systems.

I diverge. Well I diverged several paragraphs back. But hey the world is complex. To get focus I'd probably need to get a lobotomy to stop me from questioning and connecting everything.

PS: I know I don't have comments switched on; as explained in "About", this blog is largely about catharsis. If I wanted to continue the debate I would have posted yet another comment on the construction management forum. But there is no real debate going on, it is simply a war of attrition. Those with experience only are not happy, since they are unable to get a job in the current economy because they don't have a degree. Those with a degree, but not the experience, otherwise want to know how they can get experience if they cannot get a job. Neither of the two groups has the required competencies or appropriate evidence of attained competences. Persons who do not have the competences to design work processes, or properly define jobs, are to a large extent simply insulting each other, trying to shout each other down.

Such bickering is a distraction from my current priorities, such as the required performance of aluminium balustrade. If not distracted I would have written about that instead. The regulations are not clear, and have lots of people running around saying this is illegal and that is illegal, without really understanding the issues. One issue is clearly identifying the difference between a wall, a partition, a barrier, a guard railing, a balustrade and a hand rail: not clearly defined in the codes.

Also have one project, concerning a full height glass panel adjacent to a glass balustrade. It has been certified as compliant with the glazing code by the glaziers. Problem is the current glazing code is a design code, no longer just prescriptive, and dependent on the loading code: the design load is not mentioned on the certificate. First impression is that the glazing should be designed for crowd loading, in which case it is not compliant with the Building Code of Australia (BCA). But if it were replaced with a timber framed wall covered in plasterboard, no one would probably be concerned: and yet a crowd is probably more likely to knock a hole through the plasterboard wall and fall to the floor below.
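This is why the missing design load matters: a first-pass check of such a panel is simple cantilever bending of a vertical strip, and the result scales directly with the assumed load. The sketch below is a minimal illustration only: the panel geometry, the 1.5 kN/m crowd line load, the 1.1 m load height and the 12 mm glass thickness are all hypothetical numbers I have chosen for the example, not values from any certificate; a real check must take its loads and glass strengths from the applicable loading and glazing codes.

```python
# Hypothetical first-pass check of a glass panel treated as a vertical
# cantilever strip, 1 m wide, with a horizontal line load at the top.
# All input values are illustrative assumptions, not code values.

def cantilever_balustrade_stress(line_load_kn_m, height_m, thickness_mm):
    """Peak bending stress (MPa) at the base of a 1 m wide glass strip."""
    # Base moment per metre width: M = w * h, converted to N·mm
    moment_nmm = line_load_kn_m * 1000 * height_m * 1000
    # Elastic section modulus of the strip: Z = b * t^2 / 6, with b = 1000 mm
    section_modulus_mm3 = 1000 * thickness_mm ** 2 / 6
    return moment_nmm / section_modulus_mm3

# Assumed 1.5 kN/m crowd line load, 1.1 m high panel, 12 mm glass.
stress = cantilever_balustrade_stress(1.5, 1.1, 12.0)
print(f"Base bending stress = {stress:.2f} MPa")  # for these assumed inputs
```

For these assumed numbers the base stress comes out near 69 MPa, and halving the assumed line load halves it: which is exactly why a certificate that omits the design load says little about compliance. The computed stress still has to be compared against the design strength of the actual glass type before drawing any conclusion.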

Names of objects are important. Most South Australian pergola companies, for instance, do not know what a pergola is: pergolas typically do not require development approval. The result is a significant amount of time spent assessing construction of all kinds, classified as illegal until development approval has been sought and granted. All caused largely by misunderstanding of what the construction is: which name should be assigned?

Even so, I don't believe we should have more legislation restricting who can or cannot be in business; rather it is necessary to further develop the competencies and learning of both suppliers and buyers. If the buyers are better informed, then they are more likely to buy from a more competent supplier: unless they want to play silly games in hopes of getting a lower price.