An Idea so good, you’ll buy yourself a beer for implementing it!

Charge it, point it, zoom it, press it,
Write it, cut it, paste it, save it,
Load it, check it, quick – rewrite it,
Plug it, play it, burn it, rip it,
Drag and drop it, zip – unzip it,
Lock it, fill it, call it, find it,
View it, code it, jam – unlock it — Daft Punk’s Technologic.

(Hair) Triggers.
If you were to ask your project manager and a developer to define a trigger, you’d probably end up with three very different answers. Often, triggers are a quick fix for project managers who know the declarative interface just won’t solve this one. Raise your hand if you’ve ever heard the phrase “just a quick trigger.” Sometimes. Sometimes, triggers are just that, a quick fix. But if you ask a developer, you might hear those Daft Punk lyrics chanted in monotone: “Write it, cut it, paste it, save it, load it, check it, quick – rewrite it.” Sooner rather than later, developers learn firsthand the rabbit hole that triggers can be. After all, what *kind* of trigger is being asked for? What kind is really needed? How will adding this trigger affect the other triggers already in place? How will existing workflow and validation rules play into the trigger? Will the trigger cause problems with future workflows?
Triggers are phenomenally powerful, but that phenomenal power comes with phenomenal (potential) complexity. A while back, Kevin O’Hara, an MVP from LevelEleven (they make some fantastic sales gamification software for Salesforce), posted a framework for writing triggers.
Kevin O’Hara’s framework is based on one big architectural assumption: namely, that your trigger logic doesn’t actually belong in your trigger; instead, your trigger logic lives in a dedicated class that is invoked by your trigger. Regardless of whether you adopt this framework, placing your trigger logic in a dedicated class provides valuable structure to triggers in general and makes long-term maintainability much simpler. With this assumption in mind, the framework actually changes very little about how you write the actual trigger file. Here’s a generic definition of a trigger utilizing the framework.
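A sketch of what that generic trigger can look like (the object and class names here are placeholders, not part of the framework itself):

```apex
// ContactTrigger.trigger -- the trigger itself stays a one-liner.
// All seven events are declared; the logic class decides which ones matter.
trigger ContactTrigger on Contact (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete) {

    // ContactTriggerLogic extends the framework's TriggerHandler class,
    // and run() routes to the right before/after method for this event.
    new ContactTriggerLogic().run();
}
```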

Inside the logic class, there are methods available to override from TriggerHandler that correspond to trigger execution states: beforeInsert(), beforeUpdate(), beforeDelete(), afterInsert(), afterUpdate(), afterDelete(), and afterUndelete(). It’s inside these methods that your trigger logic actually resides. If, for example, you wanted your ContactTrigger to apply some snark to your Contact’s address, your ContactTriggerLogic might look something like this:
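Something along these lines — the helper method and the particular snark are invented for illustration:

```apex
// ContactTriggerLogic.cls -- only the events you override do anything;
// the rest fall through to the framework's no-op defaults.
public class ContactTriggerLogic extends TriggerHandler {

    public override void beforeInsert() {
        applySnark((List<Contact>) Trigger.new);
    }

    public override void beforeUpdate() {
        applySnark((List<Contact>) Trigger.new);
    }

    // Hypothetical helper: grumble at contacts with no street address.
    private void applySnark(List<Contact> contacts) {
        for (Contact c : contacts) {
            if (String.isBlank(c.MailingStreet)) {
                c.MailingStreet = 'No street? Couriers love a good mystery.';
            }
        }
    }
}
```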

So why do the extra work?
Not only does this framework help keep your code organized and clean, it also offers a couple of handy-dandy, very nice(™) helpers along the way. As a trigger developer, you’ll sooner or later run into execution loops: an update fires your trigger, which updates related object B, which has trigger C, which updates the original object… and we’re off. Kevin O’Hara’s trigger framework has a built-in trigger execution limit. Check it out:
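A sketch of the loop-count helper in use (class name is a placeholder; setMaxLoopCount() is the framework’s own method):

```apex
public class ContactTriggerLogic extends TriggerHandler {

    public ContactTriggerLogic() {
        // Allow each trigger event to run at most once per execution context.
        // A second pass through, say, afterUpdate() throws an exception
        // instead of silently looping.
        this.setMaxLoopCount(1);
    }

    public override void afterUpdate() {
        // ...logic that might otherwise cause this trigger to re-fire...
    }
}
```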

That bit of code, setMaxLoopCount(1), means that a second invocation of a given method (e.g., afterUpdate()) within the same execution context will throw an error. Much less code than dealing with, and checking the state of, a static variable. Say it with me now: Very nice!

Perhaps even more important than the max invocation count helper is the built-in bypass API. The bypass API allows you to selectively deactivate triggers programmatically, within your trigger code. Say what? Yeah, it took me a second to wrap my head around it too. Imagine the scenario: you’ve got a trigger on object A, which updates object B. Object B has its own set of triggers, and one or more of those triggers may update object A. Traditionally, your option for dealing with this has been just what we did above: use setMaxLoopCount(), or a static variable, to stop the trigger from executing multiple times. But with the bypass API we have a new option; any trigger that is built with this framework can be bypassed thusly:
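A sketch of the bypass in action. Assume object B is Case and its handler class is named CaseTriggerLogic — both names, and the casesToTouch list, are illustrative:

```apex
public override void afterUpdate() {
    // Temporarily switch off Case's handler by its class name...
    TriggerHandler.bypass('CaseTriggerLogic');

    // ...so this DML won't run CaseTriggerLogic (and won't loop back to us).
    update casesToTouch;

    // Politely turn it back on when we're done.
    TriggerHandler.clearBypass('CaseTriggerLogic');
}
```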

What’s next?
I believe that trigger frameworks like this one provide quite a few benefits over free-form triggers, both in terms of raw features and in terms of code quality. Splitting the logic out of the trigger and into a dedicated class generally increases testability, readability, and structure. But this framework is just a starting point. Imagine the possibilities! What if you could provide your admin with a Visualforce page to enable or disable trigger execution? Wouldn’t that make your admin giggle and offer you Starbucks? #starbucksDrivenDevelopment

So you want to mix DML inserts and make callouts in your tests? That’s cray cray!

Here’s the lowdown on how to get around the “You have uncommitted changes pending please commit or rollback…” error when trying to mix DML and HTTP callouts in your test methods.

First, a little background and a health and safety warning. Sooner or later you’ll be faced with testing a method that both a) manipulates existing data, and b) calls out to a third-party service for more information via HTTP. Sadly, this is one of those situations where testing the solution is harder than the actual solution. In a testing situation, you *should* be inserting the data that your method is going to rely on. But making a DML call (an insert) will prevent any further HTTP callouts from executing within that Apex context. Yuck. That means inserting, say, an account, and then making a callout with some of that data… well, that just won’t work. No callouts after a DML call.

So let’s cheat a bit. Apex gives us two tools that are helpful here. The first is the @future annotation. Using the @future annotation allows you to essentially switch Apex contexts, at the cost of synchronous code execution. Because of the Apex context switch, governor limits and DML flags are reset. Our second tool is a two-fer: Test.startTest() and Test.stopTest(). (You are using Test.startTest() and Test.stopTest(), right?) Among their many tricks is this gem: when you call Test.stopTest(), all pending @future methods are immediately executed. Combined, these two tricks give us a way to both insert new data as part of our test, then make callouts (which we’re mocking, of course) to test, for example, that our callout code is properly generating payload information. Here’s an example:

//In a class far far away…
//@future(callout=true) runs this in its own Apex context, free of the DML flag.
@future(callout=true)
global static void RunMockCalloutForTest(String accountId) {
     TestRestClient trc = new TestRestClient();
     Id aId;
     try {
          aId = (Id) accountId;
     } catch (Exception e) {
          // Apex won't let you construct Exception directly; this assumes a
          // custom type: public class InvalidIdException extends Exception {}
          throw new InvalidIdException('Failed to cast given accountId into an actual Id. Send me a valid Id, or else.');
     }
     Account a = [SELECT Id, Name /* plus whatever fields you need */ FROM Account WHERE Id = :aId];

     //make your callout
     RestClientHTTPMocks fakeResponse = new RestClientHTTPMocks(200, 'Success', 'Success', new Map<String,String>());
     System.assertNotEquals(fakeResponse, null);
     Test.setMock(HttpCalloutMock.class, fakeResponse);
     System.assertNotEquals(trc, null); //this is a lame assertion. I'm sure you can come up with something useful!
     String result = trc.get('');
}


//In your test…
@isTest
static void test_method_one() {

     //If you're not using SmartFactory, you're doing it way too hard. (and wrong)
     Account account = (Account) SmartFactory.createSObject('Account');
     insert account;

     Test.startTest();
     RunMockCalloutForTest(account.Id);
     Test.stopTest(); // all pending @future methods, like ours, execute right here
}

This test works because we can both a) switch to an asynchronous Apex context that’s not blocked from making HTTP callouts, and b) force that asynchronous Apex context to execute at a known time with Test.stopTest().