# Tuesday, 10 July 2018

I needed to convert a bunch of PDF files with pictures into image files.

I volunteered to create a slideshow for a family reunion, and one cousin sent me dozens of PDFs, which don't play nice with my editing software.

After playing around with several online converters, I settled on PDFtoPNG.

You don't need to install anything to use this tool.

Open a browser and navigate to https://pdftopng.online/

The start page shown in Fig 1 is pretty intuitive.

P2P01-StartPage
Fig 1

Click the large green [Choose File] button to open a "File Open" dialog, as shown in Fig 2.

P2P02-SelectPdf
Fig 2

Select a file and click [Open]. You can only convert one file at a time, but each file takes only a few seconds.

A "Converting file" message (Fig 3) displays while the system does its thing.

P2P03-Converting
Fig 3

When the conversion process is complete, a [Download] button displays, as shown in Fig 4.

P2P04-Done
Fig 4

Click this button to save your newly created PNG file to disk. If the PDF file contains multiple images, a ZIP file containing all the images is created.

Using this tool, I was able to quickly convert dozens of PDFs to PNG files and add them to my slideshow. It was simple, free, and I did not need to install or uninstall any software.

Photos | Tech
Tuesday, 10 July 2018 17:10:56 (GMT Daylight Time, UTC+01:00)
# Saturday, 12 May 2018

Day 2 of Microsoft's annual Build conference began with a keynote presentation hosted by Corporate Vice President Joe Belfiore. This was much shorter than the Day 1 keynote and focused on Microsoft 365. The presentation was split into the following "chapters":  

  • Windows  
  • Windows Developers  
  • Office Development  
  • Microsoft Graph

For me, the most interesting topic was Adaptive Cards - a technology that allows you to add functionality to Office applications, Microsoft Teams, or SharePoint. Organizations can create cards that access user and group data in Microsoft Graph and share data across applications.
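
Under the hood, an Adaptive Card is just a JSON payload that the host application renders with its own native styling. As a rough illustration (mine, not one from the keynote), a minimal card might look something like this:

    {
      "type": "AdaptiveCard",
      "version": "1.0",
      "body": [
        { "type": "TextBlock", "text": "Expense report approved" }
      ],
      "actions": [
        { "type": "Action.OpenUrl", "title": "View details", "url": "https://adaptivecards.io" }
      ]
    }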

Many of the features discussed in the keynote are available by joining the Insiders Program and using early releases of Windows and Office. Information on the Insiders Program is here.

You can watch the full keynote below or click this link.

Below are my raw notes as I watched.

Windows
    Timeline
        Apps save data to Graph (in cloud)
        Data available across devices
    Shipping in-box PC app
        Data from phone available on PC
        e.g., read and reply to texts from the PC
    Sets (available in Windows Insider build)
        Office / Graph / Web working together

Windows Developers
    Fluent Design System
        Decoupling parts of Windows to make it easier to add to apps
    UWP XAML Islands
        All Windows applications can access the Fluent Design System
        No threading across processes
        Some controls designed for Win10
    <WPF:WebView />
        Edge-based
        Available in WPF apps
    Microsoft is using ML to improve products
        e.g., grammar checking in Word
    Windows UI Library (WinUI)
        Available via NuGet
    .NET Core 3 preview available later this year
    MSIX
        Application containers
        Simpler deployment
    Android Emulator compatible with Hyper-V
    Notepad now supports Linux line feeds
    Change to MS Store revenue sharing
        Consumer apps: dev revenue share increases to 85%
        95% if your campaign drives users to the store (via web site or app)

Office Development
    Deployment
        Deploy custom functions to all users in your organization
        Deployment centralized
    Adaptive Cards
        Post / update GitHub comments and issues from Outlook
        Payments from an Outlook email message
        Build your own cards at adaptivecards.io
    Customizations to MS Teams
        Tab extensions
        Same as SharePoint extensions
        Sample app: click in Teams to launch a Power BI report
    Build Adaptive Cards for MS Teams
    MS Store has a "Teams" section

Microsoft Graph
    Microsoft Graph powers Microsoft 365
    Users sign into apps with their Microsoft Graph identity
    Get user data across apps: provide a personalized experience
    Extend Graph group or user schema: add new properties
    Microsoft Graph UWP controls available today
        Open source
        https://aka.ms/windowstoolkit

Saturday, 12 May 2018 04:36:38 (GMT Daylight Time, UTC+01:00)
# Friday, 11 May 2018

Microsoft held its annual Build conference this week in Seattle. Years ago, these large developer conferences were a chance for Microsoft to reveal everything they had been working on for the past year. In recent years, the company has been much more open, allowing users and customers to see products as they develop. But that openness did not stop Microsoft from making some big announcements this year.

The Day 1 keynote was hosted by CEO Satya Nadella and CVP Scott Guthrie and focused on developer tools and cloud computing. Microsoft Azure, Microsoft 365, and the Visual Studio ecosystem took center stage.

For me, the most interesting announcements were Azure Dev Spaces, Azure Cognitive Search, Azure Databricks, Kubernetes as a Service, and DevOps Projects.

Of course, the most exciting part came 3 minutes into the keynote, when they announced that the 2 youngest Build attendees were the 10- and 12-year-old daughters of my friends Tibi and Nicoleta!

You can watch the full keynote below or click this link.

Below are the notes I took during the keynote.


3 core pillars
    Privacy
        Privacy is a human right
    Platforms
        Azure
        Microsoft 365

Key Technologies
    Ubiquitous computing
        Distributed, event-driven, and serverless
        Azure:
            50+ regions
            70+ certifications
            Open sourcing Azure IoT Edge
    Artificial Intelligence
        Project Kinect for Azure
            Ultra-wide field of view
    Multi-device, multi-sense experience
        Near field, far field
        Mixed reality
        Microsoft 365
        Cortana
        Cortana + Alexa integration

Microsoft Remote Assist
    Video calling
    Image sharing
    Mixed reality
    Integration with MS Teams
Microsoft Layout
    Share and edit designs in real time
    Mixed reality

Accessibility

Azure areas
    Dev Tools + DevOps
    Containers + Serverless
    Internet of Things
    Data
    Artificial Intelligence

Dev Tools + DevOps
    Visual Studio Live Share
        Works across PC and Mac
        Works across VS and VS Code
        Watch co-workers' code, keyboard, mouse, and debugging context
        Work independently or together
        Secure connection
        Free!
    Open Source
        Microsoft is the single largest contributor to GitHub
        Launching today: Text API (linting and code analysis)
        App Center + GitHub integration
            Continuous integration for apps from within GitHub
            Testing on physical mobile devices hosted in Azure
    DevOps
        Branch from Kanban board tasks
        Azure Portal: DevOps Projects
            CI/CD pipeline
            Integrated with VSTS projects
            Deploy to App Services or Kubernetes
            Any language or platform

Containers + Serverless
    Kubernetes as a Service
        View health of each container
        Logging available
            Query, show charts
    Azure Dev Spaces
        Run and debug in a private space in Azure
    Azure Event Grid
        Serverless composition
        Workflow executed

Internet of Things
    IoT Edge
        VS Code project consisting of containers
        Containers running on local device
        Azure Function running on local device

Data
    Cosmos DB
        New pricing options
        Global Scale
        Multi-Master Write Support (much faster writes: trillions of reads & writes per second)
        Reduced Write Conflicts
        Graphical interface in Azure Portal: Deploy data globally

Artificial Intelligence
    Azure Cognitive Search
        Index and analyze data in Azure
    Azure Databricks
        Spark-based analytics

Friday, 11 May 2018 02:54:15 (GMT Daylight Time, UTC+01:00)
# Friday, 17 July 2015

I move around quite a bit and my laptop connects to Wi-Fi networks all over the world. Sometimes I return to those places and re-connect to the same network weeks or months later.

Once in a while, this causes a problem: a Wi-Fi network's security credentials change, but my laptop's saved Wi-Fi settings continue to use the credentials I entered last time, without giving me a chance to enter the new ones.

The simplest solution to this problem is to remove the Wi-Fi network from my laptop's list of saved Wi-Fi networks; then re-add it. If it's not a hidden network, it should automatically appear when you are in range, even if it is not "saved".

But the option to remove a saved Wi-Fi network changes with each version of Windows and it may even be missing in some versions (I still can't find it in the Windows 10 preview I'm currently running).

However, you can use the command line to accomplish this. Here are the steps.

Open a command prompt as an Administrator. This is an option when you right-click the command prompt shortcut. It requires confirmation because you can wreak a lot of havoc as an administrator.

At the command prompt, type "netsh" and press ENTER to go into network shell mode. The command prompt changes to "netsh>", as shown in Figure 1.

Forget-Fig1-netsh
Figure 1

At the netsh prompt, type "wlan show profiles" and press ENTER to display a list of all saved Wi-Fi networks, as shown in Figure 2.

Forget-Fig2-ShowProfiles
Figure 2 

Find the network you want to remove; then type "wlan delete profile name=<network name>", where <network name> is the name of the network as listed by the previous command. The network name must be surrounded by quotes. Spelling is important, but capitalization is not. Press ENTER to remove this network, as shown in Figure 3.

Forget-Fig3-DeleteProfiles
Figure 3

That's it. You can close the command prompt, or type "exit" and press ENTER to leave network shell mode. I recommend not leaving an Administrator-level command prompt open, in case you forget the power you have.

Here’s a summary of the steps:

netsh

wlan show profiles

wlan delete profile name="<network name>"
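
Incidentally, you don't have to enter the netsh shell at all: each of these also works as a single command from the same Administrator command prompt.

netsh wlan show profiles

netsh wlan delete profile name="<network name>"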

This method appears to work for Windows 7, 8, 8.1, and 10. Don’t get caught unable to connect to a Wi-Fi network again.

Friday, 17 July 2015 16:20:30 (GMT Daylight Time, UTC+01:00)
# Thursday, 28 May 2015

Sometimes, my job throws me an unexpected and pleasant curve. After spending a few months travelling the country and teaching the fundamentals of web development and cloud development, I was asked to join my team in Redmond, where we would spend a couple of days building some cool projects.

As is true of most trips I make, I arrived without a plan. Fortunately, I was assigned to a team, and some of my teammates had been planning what we would build. Jennifer Marsman had been researching the Big Data capabilities within Azure for months, so she suggested that we build something that would utilize these tools. Tim Benroeck suggested an idea that would integrate social media with TV watching, so that is what we built.

Tim noticed that many people enjoy watching a live TV event while interacting with others over social media. But that experience is nearly impossible if you record the TV show and watch it the following day - Twitter has moved on and it's difficult to go back into the Twitter stream and find Tweets that are relevant for each point in the show (especially if you want to avoid spoilers).

So we built a system that would save to Azure storage all tweets for a set of hashtags during a given time window, capturing the time of each tweet along with other relevant metadata. A user could then play back the show later and immediately start the relevant saved Twitter stream at the same point. Tweets would flow by in simulated real time, so the viewer could read social media reactions to The Bachelor's choices or to the death of someone's favorite Game of Thrones character.

The system used Azure HDInsight Storm to retrieve tweets containing a given set of hashtags (e.g., "#GameOfThrones" and "#GoT") and push them into a Hadoop HBase database, saving all metadata about each tweet, including the time and source. Tweets were imported in real time and in "archive" mode (we queried old tweets) using the Tweetinvi API. We then allowed users to start "playing" the tweets from a given time, displaying them in the same order and with the same delays as they were originally tweeted. Viewers could start watching last night's show, begin the archived Twitter stream at the time the show originally aired, and enjoy the social media experience along with the show.
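
For a flavor of the capture side, here is a rough sketch - not our actual project code - of how a Tweetinvi filtered stream can feed matching tweets into storage. It assumes the Tweetinvi API roughly as it existed in 2015; the hashtags are examples, and SaveTweet is a hypothetical stand-in for our HBase write:

    // Sketch only: capture tweets matching a set of hashtags with Tweetinvi
    using System;
    using Tweetinvi;

    class TweetCapture
    {
        static void Main()
        {
            // Twitter API credentials (placeholders)
            Auth.SetUserCredentials("CONSUMER_KEY", "CONSUMER_SECRET",
                                    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");

            // Stream only tweets containing the hashtags we care about
            var stream = Stream.CreateFilteredStream();
            stream.AddTrack("#GameOfThrones");
            stream.AddTrack("#GoT");

            stream.MatchingTweetReceived += (sender, args) =>
            {
                // Hypothetical helper standing in for the real HBase/Azure write
                SaveTweet(args.Tweet.Text, args.Tweet.CreatedAt);
            };

            stream.StartStreamMatchingAnyCondition();
        }

        static void SaveTweet(string text, DateTime createdAt)
        {
            Console.WriteLine("{0:o}  {1}", createdAt, text);
        }
    }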

I spent most of my time working on the user interface - a Windows 10 application built with HTML5 and WinJS. It gave me my first experience writing a Windows 10 app and my first significant experience with WinJS.

Many people from the product teams were on hand to help us.

This was a great learning experience for me personally and for the rest of my team.

We dubbed our creation "TweetDVR". You can view the source code at https://github.com/jennifermarsman/TweetDVR

Thursday, 28 May 2015 09:12:58 (GMT Daylight Time, UTC+01:00)
# Tuesday, 19 May 2015

Are you looking for training on software, but are on a limited budget? You are in luck. Microsoft Virtual Academy has free training on everything from Azure to Windows 10 development to Exchange Administration.

The courses are delivered by Microsoft engineers or partners - many of them leaders in the software industry. The courses I looked at ranged in length from 5 minutes to 8 hours and combine lecture, slides, and demos.

You can browse courses by topic (such as HTML5, App Development, or DevOps) or by a specific product (such as Windows Server, Microsoft Azure, or SharePoint). Of course, there is also a search box where you can enter a word or phrase in the title of the course you are seeking.

There are two ways to watch an MVA course - live or archived. Watching a course live has the advantage that you can ask questions during the broadcast; the presenters and a few others are available in a chat room to answer them. Archived courses are nice because you can pause them and even download them to your PC or mobile device in the format of your choice. Want an idea of the quality of a course before you watch it? Viewer rating scores are published, along with the number of people who rated the course, to give you an idea of the validity of each rating.

Each course is assigned a level (100 for content targeting Beginners up to 400 for Expert content); and each course is dated, which makes it easy to decide if it may be obsolete (an important consideration when talking about fast-moving technology like Microsoft Azure).

If you make it through a course, you can earn “points” but I have not figured out what these points are good for. I think they are kind of like the points on Whose Line Is It Anyway?

The biggest problem with Microsoft Virtual Academy is that there is so much material. There is literally more content than you can possibly watch. So the challenge becomes trying to find the courses most relevant to you. As of this writing, there are over 80 courses just on Visual Studio 2013 and nearly 70 on Microsoft Azure.

You can find these hundreds of online courses at http://www.microsoftvirtualacademy.com. Did I mention they are all free?

Tuesday, 19 May 2015 13:31:00 (GMT Daylight Time, UTC+01:00)
# Monday, 22 December 2014
PipeDreams Team
The Pipe Dreams team

I didn't know what to expect when I was invited to the Hack10 Hackathon in Miami last week. I heard we would be working with Windows 10, so I installed the preview. I heard we would be working with Visual Studio 2015, so I installed that preview. I heard we would be using Azure, which was good because I love Azure and I wanted to learn more. I heard we would be doing something with the "Internet of Things" (IoT) and I wondered what was meant by that. I heard we would be using Git, so I did a bit of reading because I am very much a Git noobie.

But I didn't know what we would be working on or what the format would be.

As it turned out, we were asked to split into teams of 4 or fewer and come up with an idea involving IoT, Azure, Windows 10, and Git. Many people arrived with ideas and teams already formed. I did not. I looked around and saw a team of 3 with an idea and I asked if I could join their team. It was Dave Voyles, David Crook, and Jennifer Marsman, and they were kind enough to let me in. I had worked with Jennifer in the past and I knew Dave and David by reputation, so I believed we had a very strong technical team.

David Crook had come to the Hackathon with an idea - a hardware device to monitor temperature, pressure, light, and wind flow and report that data (along with time and location information) to a database in Azure that could be queried and displayed in a portal. The hardware device would simulate the readings inside an oil pipeline because this was a real-world problem that Crook had studied before coming to the hackathon. We called our project "Pipe Dreams".

  • Jennifer worked to program the 2 hardware devices - one to monitor the environment and one to send the collected data to Azure.
  • Dave Voyles created a portal that displayed the data on a map, updating each collection point and popping up a message if data fell outside an acceptable range. He completed a web front end and started a Windows 10 client.
  • David Crook wrote most of the business logic and analysis, including some fairly complex formulas he had acquired from his research of the oil and gas industry.
  • I created an Azure SQL database and a mobile service to write and query the data.

We shared our code in a Git repository, integrated with Visual Studio Online.

When it was done, we had data flowing end-to-end, measuring the environment and collecting data via an IoT device; stored and analyzed in Azure; and reported via a web portal.

We presented our findings to the group. I opened with a video showing a pipeline explosion (of course) and promised that our solution would solve this problem. The other team members showed off the technical aspects of the solution.

It was a competition among the dozen or so teams and first place went to... Pipe Dreams! That's right, we won!

We also had a chance to see what the other teams built, which was a lot of fun. One of the more clever (and sadistic) ideas was a device that allowed audience members to rate a speaker during a presentation and give the speaker a shock if his ratings fell too low.

Two other teams placed in this competition: A team from Brazil created a game that became more difficult as more players connected over the Internet and played against you; and Paul DeCarlo, Jared Bienz, and Sertac Ozercan created a device to play music that could be controlled via a range of other devices, including Windows 8, Windows Phone, Xbox One, and the Microsoft Band.

Overall, it was an excellent weekend. I learned a lot about the technologies we worked with and I was able to partner with some really bright technologists. Microsoft had invested quite a bit into the event to help keep us Evangelists up to speed on the technical side of our jobs - we stayed in a nice hotel, ate excellent food, and there were a number of experts on hand to answer questions or help us when we got stuck.

I hope these types of events continue and I hope that I can be a part of one in the future.

Azure | Tech
Monday, 22 December 2014 00:26:48 (GMT Standard Time, UTC+00:00)
# Wednesday, 25 May 2011

I’ve spent nearly 20 years working in technology: from my university days studying Computer Engineering; through my years managing a LAN Manager® network and writing FoxPro applications; to my time consulting with companies to help them build scalable applications that solve their business problems. I work with a wide variety of software and hardware tools. I’ve become proficient with some, and I’ve developed the ability to quickly get up to speed on most.

But am I a technologist? Is the focus of my job to use computers, software and languages? Am I paid because of my expertise in a specific technology? Do customers value my computer skills over my other skills?

I never describe my professional self as an “expert” in anything. Instead, I emphasize experience, my learning abilities, and my problem-solving skills. Occasionally, a salesperson will tout my deep, technical knowledge on a topic, but I caution them against this, because it is not my greatest strength. My greatest strengths are the abilities to understand problems, to learn almost anything, to apply knowledge appropriately to a problem, and to share with others what I have learned.

I would argue that I am not a technologist – at least not primarily. As a consultant, my primary purpose is to add value to the customer. I do this by solving business problems. Some of the tools I use to solve those problems are types of computer hardware and software. But those are not the most important tools. The most important tools I use are communication skills and reasoning ability. It may be that the solution to my customer’s problem involves very few technical changes, or even none at all. If it does involve software (which is usually the case), my application of that software is far more important than the bits within it.

I’ve seen a number of consultants who are so focused on their technology of choice that they don't seek a solution outside that area. If all you know is BizTalk or SharePoint or Lotus Notes, it’s very tempting to define business problems in terms that can be associated with your favorite tool. The popular expression that defines this attitude is: “If all you have is a hammer, everything looks like a nail.”

For me, the solution is the important thing. Maybe it’s an advantage that I never immersed myself in a single technology. Maybe this keeps my mind more open to alternative solutions. If I need expertise with a particular tool, I can either learn it or find someone who knows it well.

Does this mean that there is no value in deep technical knowledge of a topic? Of course not! There is great value in learning technology. The more we know, the more we can apply that knowledge to business problems. But it is the application of the knowledge that adds the most value – not the knowledge itself.

This mind-set becomes even more important when you consider how international the software business has become. You may be a very good C# programmer. But, if you live in America, there is likely to be a very good C# programmer in India who is willing to do the same work for much less. And if you live in India, there is probably a very good C# programmer in China who is willing to work for much less. And if you live in China, keep your eyes open, because other parts of the world are developing these skills and they are anxious to penetrate this market and are able to charge even lower rates. It’s no longer possible to compete only on price (and still make a decent living) and it’s not enough to compete only on technical skill. The ability to solve complex business problems and apply the right technology can be the differentiator that allows you to compete in a global market.

Keep this in mind as you look for solutions to problems presented by your customer or employer. Focus on adding value to the business, rather than on applying a particular set of skills.

In the end, I think I serve my customers better because I think of myself as a problem-solver rather than as a technologist.

Wednesday, 25 May 2011 00:52:00 (GMT Daylight Time, UTC+01:00)
# Wednesday, 09 June 2010

Episode 92

At the 2010 Ann Arbor Day of .NET, I hosted a panel discussion in front of a live audience.

Michael Eaton, Jay Harris, Patrick Steele, Jim Holmes and Jason Follas described how they cope with the information overload of keeping up with technologies.

Wednesday, 09 June 2010 10:57:39 (GMT Daylight Time, UTC+01:00)
# Tuesday, 18 May 2010

As someone who once passed a bunch of tests (>40) to earn a bunch of Microsoft certifications (>20), I'm sometimes asked about the value of these certifications. Are they worth the time, cost, and effort they take? What are the benefits? Who benefits most?

The real cost of certifications
The real cost is not the fee to sit the exam (typically $150); it is the cost of studying for it. I used to spend weeks - at least a couple of hours each day - studying for each exam. This cost tends to far outweigh the exam fee.

What do certifications prove?
A certification demonstrates a minimal level of competence in a given technology. It doesn't flag the holder as an expert; but, assuming you didn't cheat, it does require knowledge of the subject matter in order to pass.

Everybody learns differently
I hope all of us can agree that it is not possible to succeed as a software developer, network engineer or database administrator without learning new skills every year. Each of us learns in a different way. I think most people learn a technology best when they have something to apply it to. This application serves as motivation to learn and retain knowledge. If your job doesn't provide that application, you need to create it yourself. This might be a personal or open source project or it might be a certification exam. Either way, if it helps you to learn a new skill by focusing on a tangible goal, that is a good thing.

When are certifications most valuable?
Certification is no substitute for experience, but it can help to supplement experience. This is especially true early in your career when practical experience is lacking. For those new to information technology or software development, it can be difficult to build up the experience necessary to impress a potential employer. A certification can help make up for a lack of experience, because you have demonstrated the ability to complete a goal and enough knowledge to pass an exam.

Some places require certification. Why?
Microsoft partners with companies in different ways. In some of these partnership arrangements, the partner company must have a certain percentage of their employees certified in Microsoft technology. Although far from perfect, it's a very simple way for Microsoft to vet their partners.

So is it worth it?
From a personal standpoint, I don't at all regret achieving the certifications that I did. I took most of the exams early in my career and they gained me some credibility. As recently as two years ago, potential employers asked me about my certifications and were impressed when I provided them. I have learned a lot studying for these exams and that knowledge has helped my career. I doubt that I'll be taking many more exams. My free time is limited and I prefer to use more efficient ways to learn, focusing on building applications or preparing and delivering presentations.

My advice is to consider certifications early in your career to improve your skills and improve your credibility; then spend your time elsewhere as you solidify your credibility.

Tuesday, 18 May 2010 16:53:41 (GMT Daylight Time, UTC+01:00)
# Tuesday, 06 April 2010

There is a reason why computer languages are called "languages". These languages share many common characteristics with the languages that humans use to communicate.

Humans use languages like English, French, Mandarin Chinese, and Farsi to communicate with one another. Programmers use languages like Java, C# and Visual Basic to communicate with computers.

Human languages contain words, and each word has one or more correct spellings and one or more meanings; computer languages have keywords, each with a single correct spelling and one or more correct meanings.

Human languages have a grammar to which writers and speakers are expected to adhere. Deviating from this grammar makes it more difficult to understand the message. Computer languages also have a grammar that we call "syntax". It is not sufficient to throw together correctly-spelled keywords: They must be structured properly. Some languages have stricter grammar rules than others, such as a requirement that we declare each variable before using it.

Writing quality software in a computer language is similar to writing a good book or article in a human language. It is possible to write a poorly-written book in English that has perfect spelling and grammar. Microsoft Word will report no errors when you press F7 while editing such a book, but that tells us nothing about the quality of the writing, which may still be confusing or boring. Similarly, it is possible to write slow, non-scalable, difficult-to-maintain software that violates no rules of spelling or syntax. This software will compile, but it will not perform well.

The main difference between human languages and computer languages is the precision required by each. We can communicate reasonably well in a human language, even if we use poor grammar and poor spelling. This is because we have other communication mechanisms to use, such as expression, tone, gestures and a shared context with others. Computers are generally not smart enough to understand us unless we are very specific in the words we use and in the way we structure those words. We must be more careful what we type and how we compose our words when communicating with a computer.

This is why I believe that writing software has improved my communication skills in general. Because it forces me to choose my words and grammar carefully, I have gotten into the habit of communicating with greater clarity.

Tuesday, 06 April 2010 17:53:25 (GMT Daylight Time, UTC+01:00)
# Monday, 29 March 2010

Episode 79

In this interview, Brian Genisio describes the Prism documentation and library and explains how he uses it to build applications.

Monday, 29 March 2010 12:00:16 (GMT Daylight Time, UTC+01:00)
# Tuesday, 24 November 2009

Recently, I was asked to migrate code from one source control repository to another. The customer had been using Visual SourceSafe (VSS) for many years and had dozens (maybe hundreds) of projects checked in. Most of these projects had a long history of file versions.

VSS was a decent product when it was first released, but it falls far short of newer source control systems, such as Team Foundation Server (TFS), Subversion, and CVS. This customer selected TFS as their new source control system, but they did not want to lose the history they had captured in VSS.

They asked me how to move the years of VSS history into TFS. Tools exist to do this, including  Microsoft’s VSS2TeamFoundation (available at  http://msdn.microsoft.com/en-us/library/ms181247(VS.80).aspx). However, migration tools have several disadvantages:

  1. Migrating years of source control can take a really long time, maybe weeks. You will probably want to do a test migration of your data, which will extend the time requirement even further.
  2. If you have been checking code into a source control system for any length of time, there are bound to be some mistakes: projects that were started but never went anywhere; code changes that were mistakenly checked in and had to be reverted; and duplicate source code erroneously checked into two distinct folders. If you migrate all your source code history, these mistakes will be migrated as well.

A simpler alternative to migrating every version of every project in every folder is to simply get the latest code from the old source control repository and check it into the new repository. Using Visual Studio, this requires only a few steps:

  1. Open the project in Visual Studio
  2. Get latest from the old source control system
  3. Remove bindings to the old source control system
  4. Connect to the new source control system
  5. Check the code into the new source control system

Repeat this for each solution. You will now have a current version of all relevant code checked into your new source control system.

Some users will tell you this is not enough. These users want to keep all the history of every bit of code - every version, every branch and every project. Using the above migration strategy, you can still do that. My recommendation is to keep the history in your old repository, mark that repository as read-only and leave it online. Users will still be able to use this old source control system to find their old code, but will use the new source control system for all version control going forward. This is far simpler and faster than trying to push years of changes into a new repository.

The lesson here is: Always consider the simplest alternative and determine whether it meets your needs, before considering more complex solutions.

Tuesday, 24 November 2009 11:49:04 (GMT Standard Time, UTC+00:00)
# Tuesday, 22 September 2009

I count many software developers among my friends and colleagues. Many of them tell of writing code in high school or earlier; of hacking during junior high school; or of knowing their career path at an early age.

My programming career began much later in life. Because I grew up with no inkling what I wanted to become, I majored in biochemistry as an undergrad and I studied finance in graduate school. During my eight years of matriculation, I kept busy working as a laborer for a construction company, coaching a high school wrestling team, selling financial securities, interning for a commodity trading advisor and painting. After four years attending grad school at night and working two jobs, I took my MBA and went to work doing accounting and financial analysis for a printer manufacturer. I spent almost four years at this job and it rarely changed. I learned almost nothing after the first year and found myself mightily bored.

At the time, it seemed like misfortune, but I was laid off from this job when the economy turned south and my employer sold off a large subsidiary. Months of job searching during the recession of the early 1990s left me feeling discouraged about my prospects. So I took this as an opportunity to change careers. I had taken a couple programming classes before and I had done well and enjoyed them, so I enrolled at the local university to study Computer Engineering. Sometimes the curriculum was difficult. For example, every other student in my Calculus 4 class had taken the prerequisite class the semester before.  I had taken it nine years earlier.

After two semesters of straight A’s, I was prepared to pursue a degree in Computer Engineering until the phone rang between semesters. It was an old friend of the family calling. He owned a small company in Cincinnati, had heard I knew something about computers and was looking for someone to help him with his computers. I had never been to Cincinnati before, but the offer was good and he was willing to pay for my training so I accepted and moved. Six months later, my house in Michigan sold and my family joined me.

I was a novice at that time and I knew it. I worked my tail off to learn everything I could about networking and programming and computers in general. On most days, I was the first to arrive and the last to leave work. I would get up early and drive in on Saturday to work a few hours before my family woke up. I worked at that company for five years. For most of that time, I was the entire IT department. I managed a LAN Manager network that I converted to a Windows NT network; I ran a call center of data input operators; I was the company’s primary computer help desk; I evaluated and bought personal computers and servers and printers; and I wrote all the company’s custom software.

Of these tasks, writing software appealed to me most. In programming, I had the ability to learn technical skills, to practice logical thinking, and to exercise my creativity. It gave me the opportunity to exercise all parts of my brain. I decided I wanted to focus most of my energy on programming.

At that time, my language of choice was FoxPro, which gave me a chance to build Windows user interfaces and to learn about relational databases. I learned about language constructs and programming algorithms and naming conventions and frameworks. I would stay up late into the night reading programming books and technical journals. I enjoyed learning about programming far more than I enjoyed accounting or finance.

When Visual FoxPro was released, I redoubled my efforts, trying to grasp the concepts of object oriented programming and deciding when to use inheritance.

After five years, I got the opportunity to join a local consulting company, where I could focus on software development and training. I would rotate between teaching classes and building business solutions. This was another great learning experience: Teaching made me a better programmer and programming made me a better teacher.

This consulting company was known for its FoxPro expertise but we did a fair amount of Visual Basic programming and I was able to learn my second language. When Microsoft released ASP and Visual InterDev, I learned that and began teaching a class in web development. I taught that class more than any other.  I learned about XML in 2000 and began applying it anywhere I could, like a hammer looking for a nail.

Unfortunately, the company I worked for made some poor business decisions and people began to leave – first the customers, then the consultants. I followed a friend to G.A. Sullivan (aka GAS), a medium-sized consulting company in Cincinnati. I was attracted to GAS because of all the talented developers they had on board already.  Where my old employer seemed to be drifting from day-to-day, the new group had plans. They managed projects with efficiency, they had in-house experts in numerous areas; and they were well-respected by their customers and by other development shops. Not only did I learn a great deal of technology (I was at GAS when I did my first .Net project) but I first began to do public technology presentations at that time. I spoke in front of customers and at the local VB user group (later reborn as CINUG).

To this day, I have not worked with a group as talented and tight as the folks at GA Sullivan. Most of us have moved on, but I remain close friends with a number of my former colleagues from those days.

After a couple years, GAS was purchased by Avanade, a large multi-national consulting company started as a joint venture between Accenture and Microsoft. With such enormous parents, Avanade was able to go after much larger customers. During my years there, I traveled a lot but I was able to work on a number of large enterprise applications, which helped me in understanding scalability, security and how to navigate the bureaucracy of a large corporate environment.

I had my first exposure to Rules Engines, Workflow Foundation, Unit Testing, and Continuous Integration on various projects for Avanade. I spent over a year focused almost exclusively on BizTalk Server, diving deep into Microsoft integration technologies.

I wrote very little code my last year at Avanade as I led a team designing an e-commerce integration project. Instead I got experience writing design specifications and developing project plans for a waterfall project.

In 2007, I left Avanade because I wanted to spend more time with my family. I took a job with Quick Solutions Inc. (QSI) because I was impressed with the smart developers I met there and I admired their passion working and speaking in the community. I got back into coding working on an ASP.Net portal project. I also had a chance to learn from some smart people about Agile development methodologies, Team Foundation Server and the database tools of Visual Studio. Being closer to home allowed me to spend time with the developer community.  For the first time in years, I began actively speaking at conferences and user groups and participating in user groups. In 2008, following a change in ownership, QSI decided to get rid of all their consultants outside of Columbus, OH. 

A year of being active in the local community made it easier to find a new job and I joined Sogeti, my current employer. While here, I’ve worked in a variety of industries and even did my first SharePoint project. I’ve kept active in the development community, in part as a way of expanding my own knowledge of technologies.

I’ve had a number of stops over the past 15 years and I’ve learned something new everywhere I’ve been. Looking back, losing my job as an accountant was a good thing for my career and my life.

Tuesday, 22 September 2009 05:06:01 (GMT Daylight Time, UTC+01:00)
# Thursday, 17 April 2008

Today, I was approached by someone with a request that sounded very simple.  She had a large Word document and she wanted to create an Excel spreadsheet in which each cell contained the name of a section in the Word document.  A hyperlink in each cell should open the Word document and navigate the user to the corresponding section.

Years ago, I did something similar using Office 97 or Office 2000, so I knew it was possible.

I opened the Word document and inserted a bookmark at the top of each section.  Inserting bookmarks in Word is pretty straightforward:

  1. Select the first line of the section
  2. From the menu/ribbon, select Insert | Bookmark
  3. In the Bookmark dialog, type a name for that bookmark.


I became confused when I tried creating the hyperlinks in Excel.  Inserting a hyperlink in Excel hasn't changed much through the versions:

  1. Type some text in a cell
  2. Select that cell
  3. From the menu/ribbon, select Insert | Hyperlink
  4. Find and select the file to which you want to link.

This is where I became confused. The "Insert Hyperlink" dialog contains a big button labeled "Bookmark". Naturally, I clicked this button to specify the bookmark within the Word document. Unfortunately, clicking the button displayed an error. According to Excel, Word documents don't support bookmarks, although my personal experience and the online help say that they do.


The secret is that you should not click the Bookmark button in order to link to a bookmarked location in a Word document. Rather, you should append the pound symbol ("#") and the name of the bookmark to the filename. For example, I wanted to link to a bookmark named "Section1" in a document named "BigWordDoc.docx", so I entered "BigWordDoc.docx#Section1", as shown below.
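
If you'd rather skip the dialog entirely, the same syntax should also work with Excel's HYPERLINK() function (reusing the file and bookmark names from the example above):

    =HYPERLINK("BigWordDoc.docx#Section1", "Section 1")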

Apparently, the "Bookmark" button is used for cells and defined names within an Excel document.


I'm not sure if Excel's "Insert Hyperlink" dialog has changed in the last few versions, but this strikes me as a flaw in the user interface. The visual cues didn't help me accomplish this task - they actually took me in a different direction.

Here is a working demo of an Excel spreadsheet with links to sections of a Word document: OfficeLinkDemo.zip (17.92 KB)

Thursday, 17 April 2008 21:56:20 (GMT Daylight Time, UTC+01:00)