April 07
The Big Move

Time to catch up a little on blogging… :)

Over the past year-plus, I've been working with Orchard CMS to help expand our service offerings, to force myself to learn a new technology set... and to prepare for the eventual switch to a non-SharePoint-based blog.

That's right, after nearly a decade of hosting this site on SharePoint, I've decided to switch the underlying platform. Why? Simply put, blogging on SharePoint about SharePoint was soooo early 2000's. It was an experiment that I started way back when I was a product team member and simply wanted an avenue to promote the very cool product that I had spent a lot of time to help build and test. I wanted to show that SharePoint was flexible enough to literally host at home on a coffee-stirrer-sized pipeline. Over the years, this site has been sitting at home either under my desk (which at one time had a total of 8 servers under it) or down in the server room (a portion of my basement nowadays). It worked great...

The other underlying story here is that over time, my self-hosting experience grew substantially as our capacity grew, and it really took off once I left Microsoft. We had a variety of servers to handle many of the things most of us consider "standard" online services nowadays. We literally ran a small hosting business to support the businesses. The biggest problem with this solution was always our ISP. Take that down and the businesses went down.

Sound like a classic case of "we organically grew without putting a lot of thought into it"? Well, it was. A simple little blog site experiment grew into something much larger. From a geek perspective, it was fun to put all these things together and have our own little cloud.

When we realized that we were traveling down the very same road that I often warn our customers about, we started offloading services to businesses that could handle uptime and maintenance way better than we could.

Net result, we made the decision to transition over to the "big" cloud a long time ago. The blog site was the first and is now the last element in that story line.

So how does that relate to Orchard CMS? Well, that's the new platform for the blog site.

As I mentioned back in an Oct 2013 post, I'm in no particular rush, but I am drawing a line in the sand for my own scheduling purposes. The good thing is that I have some interesting performance testing stories with Orchard. Free software is nice, but just keep in mind that someone has to eventually pay the online bill. In my next post, I'll show how I nearly doubled my hosting costs by innocently upgrading to the "latest and greatest". As geeks, we love to install patches and hit the Go button to put us up on the latest version. My next post will be a great reminder of why we must always establish benchmarks… without them, you could literally be wasting your money.

-Maurice

August 22
SP2013 installation fails at FC73469E

Earlier today I ran across a strange installation problem and I wanted to share my findings...

 

The core scenario was about as generic as they come for a single-server farm.

Base line scenario:

Windows 8 with Hyper-V

Windows 2012 Standard guest

SQL 2012 Standard

 

The deployment plan was straightforward: use Windows Deployment Services to provision a new server, then install SQL, then install the SharePoint prerequisites, then roll out SP and provision. It should have been super fast and easy.

Well, as we all know, SharePoint always tries to make life interesting. :)

Installation failed with a super descriptive error:

"SharePoint Server 2013 encountered an error during setup"

 

Time to dive into the logs... They pretty much told me nothing at first. The logs reported:

"Error: Failed to install product:  C:\<installFolder>\global\oserver.MSI ErrorCode: 1603(0x643)"

 

Definitely a better message, but it didn't quite help either.

 

Time to roll back to the snapshot taken after the prereqs were installed. Try again. Fail.

 

The piece that really helped was the Windows Error Reporting information that was available after dismissing the SharePoint installation dialog.

image

 

Error code FC73469E led me to an insightful post. Sure enough, the VM preparation steps omitted the proper CPU count.

 

Roll back to the prereq snapshot, up the CPU count, take another snapshot. Install. #FailAgain

 

Same error. Same log entries. No progress...

 

Further research showed that others had also documented this problem, and some were pointing at an MSI hack left as a comment on an MSDN blog post. Maybe I'm old fashioned, but MSI hacks ain't right.

 

Looked over the logs, checked permissions, checked the SQL installation, changed install accounts, moved the install media to a new location, checked the software prereqs... reviewed the prereq logs... lots of installs failed.

 

Then it dawned on me that all my attempts to resolve the problem started from the same base configuration - I had run the prerequisiteinstaller on the image that had the wrong CPU count. I had "corrected" the CPU problem by changing the VM with the prereqs already installed, then using that snapshot as the base.

 

The next and last test was rolling back to a clean server, updating the CPU count and *then* installing the prerequisites.

 

Bingo.

 

Net net: The prerequisiteinstaller recorded information about the wrong CPU state, which then forced the installer to choke. No MSI hacks were necessary (yeah!), only a clean *proper* VM state.

 

I should have reverted to a clean state once I determined the core state had been compromised. Instead, I "cheated" by going back to a snapshot that incorporated the "faulty" prereqs... and that in the long run cost me a lot of time.

August 08
The Boat – Moon and Stars – and some tech talk

Remember my earlier post mentioning the boat we purchased last year? Here's a little update on Moon and Stars… (and I promise there's an inkling of tech talk in this post) Moon and Stars at anchor in Oak Harbor

After a long spell in the shipyard this spring, we finally had a chance to take some photos of her with the new "branding". This is a key event because until my wife and I are ready to move aboard and gear up for the big trip, Moon and Stars will serve in our bareboat chartering company Okean Voyaging. That's the third company I also mentioned in the previous post. Hard to believe I have my hand in 3 businesses now – Okean Solutions (the official owner of this site), Aptillon, and now Okean Voyaging.

Let me first dive into the tech talk of this post… then I'll come back to the boat.

Since 2 of my companies are very clearly tech-oriented, we had an awesome opportunity to play around with several different web site platforms when creating Voyaging's site. After reviewing a bunch of different things, I ended up using Orchard with Azure as the host. This has been a good learning and knowledge-gathering exercise. In our daily SPLivelihood, your view on technology can become myopic, and I definitely needed something outside of "normal" work to kick start the learning process.

I wish I could say that I was vastly impressed by Orchard. The Aptillon team had previously examined using it for our web site (probably last summer) and I believe David enjoyed working with various prototypes he put together. Andrew and Wictor both have raved about the platform and how well it works for their blog sites. At the end of the day, we (Voyaging) have a site running on Orchard and it works ok. My frustration with the platform comes from a few different angles:

  • Orchard's documentation is good but it assumes you're a geek. More to the point, it assumes you're a dev geek. I'm probably a little spoiled by SharePoint documentation (and before anyone yells, I have lived through 12+ years of SPDocumentation) but I don't expect documentation to painfully walk me through the technology tree in order to explain how to do something. As a complete newbie, I wanted to know how to change various "simple" things. The documentation would inevitably start out with "Orchard is built on XXX, which uses YYYY and ZZZZ." Even as a dev geek, I was left struggling sometimes because I had to go look up definitions and understand entire technologies to just make a simple change. Dev documentation is not good user documentation.
  • The next thing that really left a sour flavor in my mouth was the manner in which Orchard bombed in low-resource situations. I literally chased a problem for 2 solid weeks where the UI was telling me it saved an item but the system completely lost it. It was as though the app decided something was wrong and automatically, without warning or indication, rolled back the last commit. No records or transactions could be found. In all fairness, the low-resource issue was distinctly related to the host service and how I had the site configured. However, the app itself had really bad characteristics under those circumstances. It was probably one of the worst app behaviors I've seen in years. The only "fix" was to boost resources (in Azure parlance, we had to move up to a Standard host for various operations). Once all edits were complete, we could move back to the original Shared configuration.
  • The last thing that really didn't fit well was the concept of 3rd party Modules. Open source is a dangerous arena for support because sometimes you land on gems and other times you land on mines. I quickly learned to distrust modules not written by the Orchard team. The reason was very simple – as Orchard has matured, it appears a lot of modules didn't pick up the Orchard changes from version to version. The net result is that even though you might find an interesting module in the Orchard Gallery, there is absolutely no way to tell if that module will work with your version. The only workaround to this problem was to set up various test sites to validate modules (I ended up using 1 site per module). This was a painful but necessary process because module removal is not easy or clean. If I want a "clean and minimal" site, I can only truly deploy the modules that I intend to put into service.

Where does that leave me with Orchard? Well, it works and it does have some nice features like layers and shapes (the former is easy to grok for users, the latter is not – see comment above). Before I ran into all the problems, I was blindly on the convert-the-blog-to-Orchard bandwagon (initial tests, after all, were promising). As part of my overall goal to get my IT ducks in a row for the big trip, I eventually want to move this blog over to a new host and even off of SharePoint. After living with Orchard for several months and realizing the limitations and complexities of the platform, I'll probably still move the blog over to Orchard, but I'm in no rush. I figure I'll wait a version or two before I attempt the move.

And where does the Netduino come into play? :) Well, if you have seen my tweets over the past few months, I've been busy playing around with Netduino microcontrollers and the .NET Micro Framework too.

I decided to use one of our Netduinos to monitor the site and provide health reports. This has worked so well that we've extended the monitoring to include not only Okean Voyaging (sitting on Azure) but also the various web properties owned and associated with Aptillon. We've been able to pull down some interesting uptime stats to help us determine whether or not the host systems are actually working as advertised. It's really nice to make decisions based on empirical evidence you've collected rather than relying on the host service who almost always tells you there are no uptime issues.
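The monitoring logic itself is nothing exotic. As a rough desktop-.NET sketch of what the little board does (this is not our actual Netduino code, and the URL is a placeholder), the probe is essentially "request each site, treat a timeout or error as down":

```csharp
using System;
using System.Net;

// A toy version of the health probe: request each site and record
// whether it answered with an HTTP 200 within the timeout window.
class UptimeProbe
{
    static void Main()
    {
        string[] sites = { "http://example.com" }; // placeholder URLs

        foreach (var site in sites)
        {
            bool up;
            try
            {
                var req = (HttpWebRequest)WebRequest.Create(site);
                req.Timeout = 5000; // fail fast; a hung host counts as down
                using (var resp = (HttpWebResponse)req.GetResponse())
                {
                    up = resp.StatusCode == HttpStatusCode.OK;
                }
            }
            catch (WebException)
            {
                up = false; // DNS failure, timeout, or non-success status
            }

            Console.WriteLine("{0}: {1}", site, up ? "up" : "down");
        }
    }
}
```

Run something like that on a schedule, log each result, and the uptime percentages fall out of simple arithmetic over the log.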

Back to the boat …

Moon and Stars has been a pleasure to own and sail. We had a chance to take her up to Penn Cove and Oak Harbor just a few weeks ago for a shakedown cruise (which is needed after being in the shop for over 2 months) and our annual trip to Whidbey Island Race Week. She served as crew quarters. I will be the first to tell you that I absolutely loved all the years I camped out on the dock during Race Week. Having a super comfortable bed, however, completely changed the game for me. :) We spent the entire week out in the anchoring field. It was a very relaxing experience and definitely was a highlight for our guests as well.

And when we took her out sailing in Penn Cove and Saratoga Passage for some checkout tests… oh my, talk about fun. In just over 12 knots of breeze on the nose, we were pointing and cruising right around 11 knots. For the SPSailors out there, you know those are some good numbers. For the non-sailors reading this, let me put it this way… the RV can run like a sports car. Our Catana likes to sail fast. :)

If you're interested in seeing some more boat pics, swing by the Voyaging site and check out the photo gallery.

One of these days, I will get the Netduino-based boat monitoring system hooked up and I will also eventually find a way to integrate SP into that solution somehow (geeky, I know but hey I'm an SPSailor).

-Maurice

President – Aptillon
President – Okean Solutions
Mechanic and IT dude – Okean Voyaging :)

April 12
Evolutions Conference: SharePoint & Netduino

Over here in the skunk works division of our company... we've been busy the past few months working on some prototypes that will allow us to integrate physical data with the usefulness of SharePoint.

We're unveiling a portion of our efforts at the SharePoint Evolutions Conference 2013 in London next week.  You've heard me say this before - this conference rocks. I love this conference for many reasons but the one reason that sticks out in my mind is the simple fact that I spend a lot time creating new content. This year was no different! The fact we got to use soldering irons to put a demo together made this the most exciting presentation build... :)

 

If you are attending the conference next week, swing by my session and check out the evolution of business data collection and management...

Title: Remote monitoring with SharePoint 2013 and making it smart!

Session: COM710 from 1500-1600 in the Rutherford room

 

This should be a fun session with demos, hardware with blinky lights, and hopefully some good discussion!

 

-Maurice

February 12
How to lose MVP status

First, the title is tongue in cheek.

This past fall, I lost the MVP status that I had carried for the past 5 years. As everyone knows, Microsoft’s MVP status is an award bestowed on those that are actively involved in the “community”. Every year, the team looks at what you’ve done in the past 12 months to figure out if you deserve a marketing award.

The short answer to the title was that I was busy working. Last year was an absolutely dizzying year. The net result was that I wasn’t an active blogger or speaker. I had a few posts and only 2 speaking engagements. Shame!

What was I doing? Working on SharePoint and life’s other miscellaneous projects. :)

Here’s a rough breakdown…

SharePoint – Aptillon has been moving forward in leaps and bounds. Work was good. Work kept me traveling. By the end of January last year, I had already been in Jersey City, Dallas, Ft. Lauderdale, and Honolulu. 24,000 miles in 1 month. Ok, Ft. Lauderdale was not work related (more on that later). Mileage total for the year? Roughly 160,000 miles here in the US - none of that fancy travel to Europe or Australia or Antarctica.. and that was with me putting the kibosh on travel for nearly 2 months - twice. I reached Delta’s platinum level before most folks even start buckling their seat belts. SharePoint in the cloud? Just pray you don't have a screaming kid next to you. :)

Businesses – I now own a part of 3 companies. I started out owning 2 and by year’s end, my wife and I started a new venture. Running a single company takes time and effort. Running two takes patience. Running three requires medication!

Speaking - last year was the quietest year I've had since I first entered the speaking circuit. There were a couple of occasions where I just forgot to send in ideas and applications. Worse yet, I made a huge scheduling mistake when I double booked a conference with a client engagement. We got the project out the door that week, so it was a nice offset to missing the conference in Orlando.

The Boat – Sailing has been a part of my life for quite some time. My big dream in life has been to buy a boat and go around the world. Luckily, the woman I married also shares that dream. We started looking for a boat in late 2011. We entered 2012 with some serious shopping plans (it’s why I was in Ft. Lauderdale in Jan). Well, long story short. We are now the proud owners of a beautiful Catana 472. We found her in San Diego and after a super long story involving an incompetent captain and a company that literally left us (and others) high and dry in Ensenada, she came home to Tacoma in late October.

Tacoma – For those not from the Seattle area, Tacoma is roughly 35 miles south of Seattle. I’ve been in Seattle for 17+ years. In that time, I spent all of maybe a half day in Tacoma. It’s never been a destination for me. Either you’re driving through Tacoma on your way to Portlandia or taking the long route to the Olympic Peninsula. With the boat moored in Tacoma, we’ve had to learn a new city. I have to say it’s been a lot of fun learning more about the city. It’s got some hidden gems and you can definitely see where they have been trying to revitalize the city and the waterfront.

The Boat (part 2) – With the boat moored in Tacoma, we literally took on a new primary job – boat maintenance. First, we’ve had to figure out everything there is about the boat. Have you ever bought a house or rented an apartment? How much time do you spend thinking about how things work in house? Probably next to nothing. You figure out where the light switches are located and then you move in. Boats, especially the larger they get, are complex machines. You need to know where the switches are… what is connected to them (outlets)... what they are connected to (breakers)... what are the emergency shutoff points...  what parts are needed in case something breaks... etc.. etc... now rinse and repeat for water - all three types, fuel, etc... in a nutshell, understanding what we have and identifying all the things that need to be fixed (especially after a 1300 nm voyage up the West Coast) has been daunting.

MCSM - even though I lost my MVP status, thankfully I didn't lose my Masters certification! To be honest, I am still having problems typing MCSM (Microsoft Certified Solutions Master) rather than MCM. You might have noticed my earlier post on how the certification program is changing things around. Not only was I excited to see these improvements make their way into production, but toward the end of the year I also had a chance to work with the Masters certification team again. It was fun getting back into the frame of mind of building detailed courseware. My teammates David and Matt also helped out as we put together four different modules for the Masters program. Then I had to do double duty in late December as a student for the first rotation in the updated format.

 

Somewhere in there... we had family come visit twice, and we hopped on planes to visit them a few times as well. Oh yeah - I even had my tonsils taken out some time in April. That was a fun drug trip that did not involve planes at all. :P

Pretty much 2012 was a whirlwind. SharePoint 24/7/365 + a boat ... and a lot of planes.

My pledge to the long time readers of this blog ... I'll actually reserve some time this year to share more stories about SharePoint and how to make it do more for us. I might even throw a few boat stories...

Have fun!

-Maurice

January 11
Reviewing the MSFT certification overhaul

Certifications. Need it? Want it? Worth it?

Those are common questions that I hear from customers and the folks on the front line. The answer in the past was often times buried in the intricacies of perceived value, but for a lot of folks it comes down to the simple process of evaluating talent. Either you’re selling talent or you’re trying to acquire talent. Certifications are intended to provide a measurement stick. It’s like looking at a resume and figuring out if the applicant can even spell Sharepoint.

In the SharePoint space, though, the standard certifications have traditionally been too easy to obtain, and thus the Worth portion of the equation was often times devalued due to the ease. Microsoft has realized that they needed to put the certify back into the certification process. Today, we’ll discuss the changes that have been made to Microsoft’s certification stack. Your answers to Need, Want, and Worth might change. Hopefully, you’ll start to see the promise of the new system.

But first, let’s take a look back in time… Starting with the 2003 SharePoint certifications, and all the way through 2010, the “core” certifications for SharePoint always involved tests that centered on admin and developer topics. Those topics were then split into “beginner” and “advanced” sections. The model was built around arguably sound reasoning: some people are less experienced and then grow their talents. However, there was a key problem with those tests – they really didn’t measure your ability. They measured your capability to take a test. In a nutshell, the tests didn’t validate your knowledge or experience. Put differently - if administrators walk into a developer test, having never written a single line of code, and pass the test… is that a good developer test?

Unfortunately, this gap between theoretical and actual validation caused a lot of problems. If it was too easy to get a certification, then folks that relied on certifications to measure experience were basically up the creek.

Knowing there was a serious problem in this space, Microsoft introduced the Master’s level certification for SharePoint in early 2009. The Master’s certification was designed to validate a candidate as having actually used the product in real-life scenarios, in addition to having completed a very rigorous training and testing regimen. It was designed to be tough – you had to complete all the underlying SharePoint exams for both admin and dev (ok, that part was easy), then submit a resume outlining your body of work in the SharePoint space; then, if selected, you would have to navigate a phone interview before final acceptance into the program, take a 3 week course, and cap it all off by taking a hands-on qualifying lab and written test. The Master’s program was without a doubt the hardest certification to obtain. The Master’s program was designed to provide the market place with proven experts within SharePoint.

However, there was still one core problem… the program, when looked at as a whole, was imbalanced. The underlying exams were too easy, and the Master’s program, being the next jump up, was almost too deep for most people. There was no in-between. We needed an in-between. It was like going from grade school to graduate school in one leap. The changes introduced by Microsoft in the last half of 2012 have been designed to update the process. We now have a defined path of increasing difficulty that is better tied to the components of the platform *and* allows candidates to grow their experience at their pace.

We now have a path from grade school to high school to undergrad and beyond. First, let’s take a look at what it takes to become the Certified Solutions Master. The program requirements are available at http://www.microsoft.com/learning/en/us/mcsm-sharepoint-certification.aspx. Digging deeper, we find that the administrator and developer certifications are truly geared toward testing your knowledge of the technologies (see also http://www.microsoft.com/learning/en/us/certification-overview.aspx).

A quick read reveals a few key changes:

SharePoint certification is no longer focused solely on SharePoint

I love to tell folks that SharePoint is an ecosystem. If you treat it as an application, you’ll fail. SharePoint has many components, all of which have different characteristics. Certification should be no different. Both the administrator and developer tracks now incorporate facets that live outside of SharePoint. This makes a ton of sense. I can’t be an administrator of a SharePoint farm without understanding the operating system, active directory, etc., and likewise, I can’t be a good developer without understanding other common development technologies and techniques that live outside of SharePoint.

Commonalities make it easier to grow and cultivate your experience

SharePoint certification now relies on tests that are validated, refined, and used by other segments of the technology stack. Think about it this way: do you want the SharePoint team testing you on how to be a server administrator, or would you rather have the Windows team test you? Also, by leveraging the courseware in other technologies, a candidate has the opportunity to spend time in other areas without worrying about digging themselves into a hole.

Courseware improvements reestablish the value of certification

From the Master’s perspective, because the course is now spread out across the different segments of the platform, the SharePoint certification team can focus on teaching rather than trying to go through a laborious process involving interviews and resumes. The MCSM certification pre-requisites ensure the candidates actually fit the bill of a Master’s candidate.

Keep on learning

Digging deeper into the certification changes, you’ll also find that certifications are no longer static. This means that certifications will expire unless you go back and recertify. At first, I didn’t like this idea because it felt like a forced learning process. However, it makes sense. Technologies change and the platforms are ever evolving. The tests themselves will change. This is brilliant, as tests for complex platforms such as SharePoint will incorporate field experience and other improvements. As SharePoint grows, the tests will improve, and the *next* time you have to take a SharePoint exam, you’ll be able to validate new skillsets. Put differently – if you want to keep the shiny little badge of honor on your resume, you should be up to speed on the product and technology space.

It’s worth noting the developer track hasn’t been fully announced, but if initial talks are any indication, you will find that the developer courses will be heavily influenced by content from existing developer courses in other areas. Again, the concept is to leverage knowledge as much as possible.

In general, the retooling of Microsoft’s certification process is a welcome change. Need it, Want it, and Worth it? As you grow with SharePoint, I think the answer takes on a vastly different outlook than it did in the past. The courseware is more extensive and the tests are shaping up to be much better than in the past. We’re no longer jumping from grade school over to doctoral work – there’s a middle ground and you get to figure out how to best tailor that experience for yourself.

-Maurice

August 13
Be sure to specify a category ID for a custom SPDiagnosticsCategory

Every once in a while you run into a silly bug that actually takes you a long time to figure out. Some time ago I ran into a problem with a custom SPDiagnosticsServiceBase class that only surfaced via PowerShell.

Let’s take a closer look at the problem...

image

First, we start out by creating a custom diagnostic logging class that contains multiple categories. When it’s properly deployed, we’d see a custom area ("Test Area") and all of the categories in Central Admin. Through the Diagnostic Logging page, you’d find you can change the throttle limits like any other out-of-box category.

Now, let’s open up a PowerShell console and try to change one of the categories using Set-SPLogLevel.

This cmdlet allows you to specify the area and category using a very simple Area:Category format. Alternatively, if you specify a string with no colon, the value you provide is treated as a category.
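To spell out those formats in text (the "Test Area" and "dddd" names below come from my test class), the calls look roughly like:

```powershell
# Area:Category form - target one category within an area
Set-SPLogLevel -Identity "Test Area:dddd" -TraceSeverity Verbose

# Bare string with no colon - treated as a category name
Set-SPLogLevel -Identity "dddd" -TraceSeverity Verbose

# Wildcard form - all categories within the area
Set-SPLogLevel -Identity "Test Area:*" -TraceSeverity Verbose
```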

In the next screenshot, you can see that I am trying to set the TraceSeverity to Verbose for category "dddd".  Note the error that is returned!

image

Huh?! Didn’t I just see that category in Central Admin?  How about if I try the more generic Area:Category format with a wildcard? Same result!

I shot an email off to my teammate Gary Lapointe, who knows all things PowerShell, and to a group of SharePoint Masters. No luck. Apparently no one had seen this problem. This was going to be just one of those days...

The biggest clue popped out when I tried using the most generic form of Set-SPLogLevel.  If you don’t specify an Identity parameter (for example: "Set-SPLogLevel -TraceSeverity Verbose"), Set-SPLogLevel sets the provided value to all categories.  However, for our custom diagnostic class, what do you see?

image

SP was seriously getting confused.  Three problems jumped out: Trace Severity was set to the wrong value (None instead of Verbose), Event Severity was improperly changed to "Error", and finally only the first category was updated.

To make a long debugging story short, I finally was able to isolate and fix the problem!

Core problem:

My problems all arose from how I initialized the categories themselves.  If you call the simplest constructor (as I did and most likely a lot of other folks!), your code would look like ...

image

 

The really tricky part to isolating the problem is that you need to set the log levels via PowerShell first. If you set the log level via Central Admin, the problem is mostly hidden.

 

The fix:

Use the SPDiagnosticsCategory constructor that takes four parameters

In particular, specify a value for the category id.  The updated initialization block looks like...

 

image

In my example, the id value is coming from an enum that is used for categorization of the items written to the diagnostic service.  This could have easily been as simple as an incrementing counter (0, 1, 2, etc).

Be sure to start the value sequence at 0. I found that if you start your ID values at something other than 0, you’ll eventually run into the same problem.
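For readers who can't see the screenshots, here's roughly what the corrected initialization looks like. The enum and method names here are hypothetical stand-ins for my actual code, and I'm sketching from memory, so double-check the exact four-parameter overload signature against the SDK:

```csharp
// Hypothetical enum that doubles as the source of stable category ids.
// Per the note above, the id sequence must start at 0.
private enum LoggingCategory : uint
{
    Aaaa = 0,
    Bbbb,
    Cccc,
    Dddd
}

// Corrected initialization: use the four-parameter overload so each
// category carries an explicit id, instead of the simple
// (name, traceSeverity, eventSeverity) constructor.
private static SPDiagnosticsCategory BuildCategory(LoggingCategory category)
{
    return new SPDiagnosticsCategory(
        category.ToString(),        // category name shown in Central Admin
        (uint)category,             // explicit category id - the key fix
        TraceSeverity.Medium,       // default trace severity
        EventSeverity.Information); // default event severity
}
```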

With the SPDiagnosticsCategory initialized in this fashion, setting the log levels via PowerShell will work as expected.

 

Log away!

-Maurice

October 10
Using PowerShell to create new sites based on site-scoped WebTemplates

WebTemplates are definitely a powerful new construct in our SharePoint 2010 toolbox. They also come in handy because they can be deployed as sandbox-compatible features.

Creating a site based on a web template is pretty straightforward via the UI. Basically it just shows up as another site template option. As a user creating a site, you’d never know the difference between a farm-scoped or site-scoped WebTemplate. However, if you want to use PowerShell, you will notice that your PS scripts will take on a slightly different shape based on how the WebTemplate is scoped.

If the WebTemplate is deployed as a farm-scoped feature, then you can easily use New-SPWeb in the following manner:

new-spweb $url -template "{GUID}#WebTemplateName"

 

where GUID represents the parent feature ID.

If the WebTemplate is deployed as a site-scoped feature, then your PowerShell needs to be adjusted. Otherwise, if you attempt to use new-spweb, you’ll get the following error message: "Template is not found and is not applied." This effectively means the cmdlet could not locate a farm-level template to apply to the new site.

For example...

image
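In text form (the URL and template name are placeholders), the failing attempt looks roughly like:

```powershell
$url = "http://sitecollection/site1"

# Fails when the WebTemplate feature is site-scoped:
# "Template is not found and is not applied."
new-spweb $url -template "{GUID}#WebTemplateName"
```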

 

There are two ways to circumvent this problem:

  1. Once the site is created, call ApplyWebTemplate
  2. Before the site is created, grab a reference to the appropriate WebTemplate and provide it as a value to the SPWebCollection.Add method on the parent site.

Examples

Calling ApplyWebTemplate

$url = "http://sitecollection/site1"
$w = new-spweb $url
$w.ApplyWebTemplate("{GUID}#WebTemplateName")


Calling SPWebCollection.Add

$url = "http://sitecollection"

$w = get-spweb $url

$template = $w.GetAvailableWebTemplates(1033) | ? { $_.name -eq "{GUID}#WebTemplateName" }

$w.Webs.Add("site1", "sample site 1", "sample description", 1033, $template, $false, $false)


The difference between the two methods basically boils down to the language selection for the new site. With the simple call to ApplyWebTemplate, the new site uses the same language as the parent. By grabbing a reference to the web template beforehand, you have more control.
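For instance, here is a sketch of creating the child site in a different language than the parent (assuming the matching language pack is installed; the German LCID 1031 and the site names below are purely illustrative):

```powershell
$w = Get-SPWeb "http://sitecollection"

# Ask for the German (LCID 1031) flavor of the available web templates
$template = $w.GetAvailableWebTemplates(1031) | ? { $_.Name -eq "{GUID}#WebTemplateName" }

# Create the child site using that language
$w.Webs.Add("site2", "Beispielseite", "Beschreibung", 1031, $template, $false, $false)
```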

-Maurice

October 07
SharePoint Conference 2011 Wrap Up

This year’s SharePoint Conference was probably one of the most interesting conferences Microsoft has hosted in the past few years.  The attendance was solid, and the presentations covered the spectrum from 101 fundamentals all the way to nitty-gritty details.

Monday morning I had a chance to present on managing the Sandboxed Code Service (SP376).  I was a little skeptical that we’d fill a room with 700 seats, but I was very pleasantly surprised to see the room fill up before we switched on the microphone.  There were a ton of good questions afterwards as well.  Thanks to everyone who attended and posted all the great messages on Twitter. If you have any questions I was unable to answer, please feel free to reach out.

The conference was also a great chance to run into folks.  I saw many old friends – some that I haven’t seen literally in years – as well as many clients and former Critical Path Training admin and dev students.  It’s always amazing to see the positive energy!

The Aptillon team also had a record number of presentations at the conference: 7 presentations by 6 different teammates.  As a company partner, that is definitely a very cool stat, but the nicer fact is that we had a chance to hang out.  Since we’re spread out all over the US and constantly on the go, it’s rare to have more than 2 of us in the same room at the same time.

There was also a record number of Microsoft Certified Masters from across the globe at the conference.  How cool is that?  I remember the days when Spence and I were the only MCMs who weren’t employed by Microsoft. :) MSL also announced a new certification – Microsoft Certified Architect.  It’s really nice to see the program growing!

Great conversations all around. New projects, new ideas, confirmation of design decisions... chatting about the sandbox, helping folks get a better perspective on PowerPivot and its amazing potential, the cloud, getting out of the sandbox (aka Azure), Windows Phone 7, watching cloud-servicing applications such as Sharevolution hit their stride, building new partnerships... truly exciting stuff!

-Maurice

September 27
Regex for Wiki Page regions

In an earlier post, I briefly touched on the value of creating your own InsertWebPartIntoWikiPage method. Part 1 – solved.  Now comes Part 2 – where am I injecting my web part?

That’s a valid question, and it’s easily answered whenever you’re using the SharePoint wiki page editing tools within the browser.  The tools let you basically pick a spot in the rendered text, click Add Web Part, and you’re done.  They know how to safely inject your web part without forming bad HTML under the hood.

From a programmatic sense, this is much harder because we don’t get the “smarts” of knowing where to insert the web part markup.  Most users that try to use InsertWebPartIntoWikiPage invariably fail the first few times because they forget to account for the fact that they are injecting web part markup into an HTML blob and therefore need to properly account for well-formedness (is that a word?).

The net result is that folks will create code that looks like…

InsertWebPartIntoWikiPage (file, webpart, randomValueKeepFingersCrossedNothingBreaks)

The validity of the resulting HTML becomes something like rolling dice at the casino.  Most folks can start with an empty page and count the number of characters to the first legitimate location where web part markup can be injected.  However, once the page is created and populated, all bets are off, and IWPIWP becomes more of a hindrance than an aid.  It doesn’t have to be that way, though.

Wouldn’t it be nice to be able to readily determine where the valid injection points are located? Sure! The answer comes in the form of good old fashioned regular expressions.

If you’re like me, you’ve got a nice little regex library for most any HTML operation (tag stripping, tag collection, validation, etc.).  Well, here’s one more that you can add to the list... a regex to safely identify your wiki page regions!

[image: region-matching regex]

This little regex allows you to easily determine how many regions are in your wiki page and, more importantly, where they begin and end.  The regex groups the valid editable regions under the name “InnerHtml”.  They represent the blue-bordered regions you see whenever you are editing a wiki page in the browser (as shown below).
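The original screenshot of the regex didn’t survive the migration, but a rough reconstruction (assuming the editable zones are wrapped in divs carrying the ms-rte-layoutszone-inner CSS class, as they are in SharePoint 2010 wiki page markup, and ignoring nested-div complications) looks something like this:

```powershell
# Hypothetical reconstruction -- verify the class name and nesting
# against the WikiField markup of your own pages.
$regex = '(?s)<div[^>]*ms-rte-layoutszone-inner[^>]*>(?<InnerHtml>.*?)</div>'

# $wikiHtml holds the page content, e.g. $item["WikiField"]
$regions = [regex]::Matches($wikiHtml, $regex)
$regions.Count                       # number of regions in the page
$regions[0].Groups["InnerHtml"]      # first region: .Index, .Length, .Value
```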

[screenshot: wiki page in edit mode showing the blue-bordered regions]

In its simplest form, you can now inject your web parts in the right places without destroying your existing HTML.  For example, in the following clip, we add a web part at the start of the second region in the page.

[code screenshot: injecting a web part at the start of the second region]

Done! Page is updated with my web part in the right place and it all just works.

The really nice thing is that since we’re only identifying the HTML within each region, we can easily use the regex to update the text in the region of our choice as needed.

Here’s another example where we identify a region and then replace it with new text.

[code screenshot: identifying a region and replacing its text]
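In the same hypothetical spirit as the regex above (the region index and replacement markup here are purely illustrative), the replacement boils down to splicing new text over the matched InnerHtml group:

```powershell
$regex = '(?s)<div[^>]*ms-rte-layoutszone-inner[^>]*>(?<InnerHtml>.*?)</div>'
$g = [regex]::Matches($wikiHtml, $regex)[1].Groups["InnerHtml"]  # second region

# Splice the new markup over the old region content
$wikiHtml = $wikiHtml.Substring(0, $g.Index) + "<p>Hello, wiki!</p>" +
            $wikiHtml.Substring($g.Index + $g.Length)

$item["WikiField"] = $wikiHtml
$item.Update()
```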

What other tricks can you come up with?  How about identifying the web parts that already exist in the page?  How about placing new web parts mid-stream without breaking the existing HTML structure?  All possible with regex, especially now that we’ve clearly identified the InnerHtml of your regions!

You’ve got to love the power of regular expressions; with a little ingenuity you can solve each one of those problems.

-Maurice

© 2004-2013 Okean Solutions
Aptillon, Inc.
Microsoft Certified Master - SharePoint 2010 & 2007