Building a Social, Proximity-Aware E-Business Card

A Dreamforce ’14 Hack

About a year ago, I was privileged to participate in one of Apple’s iOS Dev Day conferences in New York City. Tickets were emailed out as Passbook passes, and as I approached the registration desk on the day of the event, my phone magically alerted me and pulled up the ticket pass. Later that day, Apple’s Dev Evangelism Team explained that they’d built the ticket passes with iBeacon technology. Their registration computers were broadcasting a Bluetooth 4.0 signal, and every ticket holder’s phone with the pass in Passbook automatically listened for that specific Bluetooth “beacon” and posted a notification when it came within range. Ever since that day, I’ve been experimenting with Passbook, Passes and Beacons.

Before Dreamforce this year, I decided I wanted to find a way to harness Passes and Beacons to meet as many of my Twitter friends, fellow devs and the technologically curious as I could. In the end, I created a proximity-aware, “socially-viral” e-business card that, through the power of Passbook, alerted anyone carrying it when they came within beacon range of me.

A Pass Primer

The language surrounding iBeacons, Passbook and Passes is a bit befuddling, so let’s look at all the moving pieces here:

  • Passes: Passes can be one of a number of things: Loyalty Cards, Event Tickets, Bus Passes, etc. The over-arching idea is that a Pass represents access to something. From a technical standpoint, a Pass is a zip file containing a signed .json file and a set of images. Importantly, a Pass is a standard!
  • Passbook: Passbook is an application included in iOS since version 6 that is used to capture, display and store Passes. Because a Pass is a standard, there are numerous Passbook-like applications for Android, and Windows Phone’s Wallet app supports them as well.
  • iBeacon: iBeacon is the Apple name for a Bluetooth 4.0 (or Bluetooth Low Energy) transmitter broadcasting 3 specific pieces of information:
    1. UUID – A 32-digit string uniquely identifying the beacon(s) used for a given purpose. There can be many beacons with the same UUID, but all beacons sharing a given UUID should be for the same purpose or from the same organization.
    2. Major value – This is an integer value used to group like beacons within a geographical area.
    3. Minor value – This is an integer value used to differentiate beacons with the same UUID / Major value.

Use Cases

The UUID/Major/Minor scheme can be confusing, so here are two examples of where you might share a UUID and Major value amongst several beacons.

Imagine you’re the CIO of a chain of supermarkets. You want to place beacons around your stores to advertise produce, steak, dry goods and dairy specials. Rather than assigning different UUID, Major and Minor numbers to every beacon in your stores, you can set them up so that the UUID is shared amongst all your stores, the Major # represents a single store ID and the Minor # represents a particular area of the store. Set up this way, you could identify which stores are having more beacon hits than others and, if you store timestamps, extrapolate the general flow-paths customers take through your store. This would allow you, on a per-store basis, to design marketing and sales materials for the “highly visited” portions of your store.

On the other hand, say you’re a vendor at a large trade show with 400 other vendors struggling for the attention of the 145,000 attendees. You want to drive as much traffic to your booth as possible. Traditionally, you could accomplish this with unique, killer swag like quad-copters, skateboards and faux-pro cameras. Alternatively, you could establish a network of beacons sharing the same UUID and Major number that act as way-points within the conference hall to help attendees find your booth. Attendees whose phones have hit all the waypoints get the killer swag. Make it a game, a scavenger hunt that drives visits to a collection of booths. The UUID would reference the conference, the Major # the vendor and the Minor # the waypoint or scavenger hunt step.

Regardless of the use case, there’s a singular challenge to utilizing Beacons to broadcast proximity awareness: your end-user must have an App or Pass installed on the device. In my case, I tweeted the pass’s installation URL prior to Dreamforce, and set the pass up to display a bar code that Passbook (though sadly not any of the Android apps I tried) could scan-to-install. While that seems like a significant hurdle, almost 1,500 people installed the pass before the end of Dreamforce with just a bit of advertising. App-based distribution of proximity alerts can reach much further, though. For instance, were Salesforce to build beacon awareness into the Dreamforce app, virtually all attendees would have access.

How to build your own

To distribute the pass itself, and to provide a bit of insight as to where people were snagging the pass from, I built a simple Rails app. As I mentioned earlier, the Pass is nothing more than a JSON file, and some images that are signed and zipped. To accomplish the signing and zipping of the Pass, I used the excellent passbook gem. I’ve put the source of the Rails app up on BitBucket.

The important operational portion of the application is the app/controllers/pass_controller.rb file, which has an admittedly ugly HEREDOC containing the JSON needed for the Pass.

The JSON holds everything from my name to the beacons object that defines which beacon UUID/Major/Minor combination(s) the pass should respond to. A single pass can define multiple beacons to respond to! If you want to clone this and make your own e-biz card, note that you’ll need to modify the beacons object with your own UUID/Major/Minor and update the images.
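For reference, the relevant fragment of the pass.json looks something like this. This is a sketch from memory of Apple’s documented beacons format; the UUID and relevantText values below are placeholders, not the ones I actually shipped:

{
  "beacons": [
    {
      "proximityUUID": "00000000-0000-0000-0000-000000000000",
      "major": 1,
      "minor": 1,
      "relevantText": "codefriar is nearby. Come say hi!"
    }
  ]
}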

A few other objects of note in the JSON are the “Generic” object and the “backfields” object. These objects contain the key-value pairs for the information you want to display either on the front (generic) or back (backfields) of your pass. If you’re creating other kinds of passes these fields will be different.

This Rails app is deployable to Heroku, and is set up to geolocate the IPs of pass installations. One interesting note: I expected a fairly even distribution of pass downloads across the world, but discovered that phone carriers tend to terminate their mobile data connections in a few select cities. Check out this map to see what I mean:

[Image: Pass installation map from the E-Card Passbook server]

On Salesforce, Poodles and Callouts

This morning a friend asked for the low-down on Salesforce, SSLv3, Poodle and what a Callout was. She was the fourth such person to ask about this, and I decided a quick primer on internet communication might help. The following isn’t meant to be the most technically correct set of definitions, glossing over many details to provide a high-level, non-coder overview.

Computers on the internet communicate with each other using a set of protocols. You can think of a protocol as a sort of rigid dialect of a given language. In general, these protocols are described and written out as “TCP/IP,” which, in typically unoriginal geek naming fashion, stands for “Transmission Control Protocol / Internet Protocol.” These protocols do the bulk of the work for sending data across the wires and through the tubes. They handle the mundane communication “conversations” that might look something like this:

Computer1: “Hey, You there, out in California. Sup?”

Computer2: “Hit me with some mad data yo.”

Computer1: “Ok, here’s this ultra-important tweet @codefriar wants to post”

<data>

Computer2: “Got it. Thanks yo. Tell @codefriar 201”

In the beginning was TCP/IP, along with other protocols you’ll recognize. Ever seen HTTP:// ? FTP:// ? These are data protocols that define how a web page’s, or a file’s, data is transmitted. If you’ll permit me an analogy from Taco-hell, internet communication is not unlike a 7-layer burrito: HTTP layered on top of TCP/IP, and so on. Even though TCP/IP + HTTP does the vast bulk of the work, as the internet grew up, we consumers decided sending our credit cards to vendors unencrypted was a “bad idea”(tm). In response, some wicked-smart, well-meaning fellows at Netscape (remember them?) developed this thing called Secure Sockets Layer, or SSL. SSL is an optional layer designed to sit between TCP/IP and HTTP. A long time ago (10 years ago, no kidding) SSL was replaced with TLS, or Transport Layer Security. SSL and its replacement TLS function by establishing a protocol-like communication between two computers that looks something like this:

Computer1: Hi, my user asked me to talk to you, but I don’t trust the internet; because internet. So if you don’t mind, tell me who you are, and tell me what encryption schemes you speak. I’m going to start our negotiations with TLS1.2.

Computer2: Uh, due to a network glitch, old hardware, old software, or just because I’m grouchy, I’m going to offer TLS1.0.

Computer1: Ugh, stupid computer, I guess TLS1.0 will work. Now let’s create a one-time encryption key for this session that only you and I will know about.

Computer2: Sure, though I think your attitude towards my “enterprise” (ie: out of date) TLS version is quite rude. Here’s my Public key, and a one-time key. <key data>

Computer1: “enterprise my ass”, I’ll accept the key.

<data>

Computer1: kthxbai

Any further communication between the two computers is then encrypted with that session specific key. This is a “Good Thing”(tm).

The important part here is that the two computers negotiate which encryption scheme to use. As you can imagine, the computers try to negotiate the highest level of encryption they both support.

Here’s where the POODLEs come in. Some very smart, well-meaning encryption gurus at Google found out that computers can be fooled into negotiating down to a less-secure version of encryption, and that the less-secure encryption used is, well, in a word, useless. POODLE is the name the Google researchers gave their exploit. In their own words:

…there is no reasonable workaround. This leaves us with no secure SSL 3.0 cipher suites at all: to achieve secure encryption, SSL 3.0 must be avoided entirely.

(Emphasis mine). POODLE is dangerous precisely because the encryption methods offered by SSLv3 are weak enough that a “bad person”(tm) could listen in to communications and steal information. (Jerks.)

Now, let’s put some legs on this set of concepts. If you want to buy something online, your computer is going to initiate that encryption-version-detection dance. If you’re buying from a major vendor online, say one based in the lovely land of Washington, you’ll find that their computers will not accept SSL v3.0, because that would be insecure. This is a good and wonderful thing.

On the other hand, let’s say you’re a company that provides a Platform for software development. As part of that platform, you allow your developers to make “callouts” to other internet-based services. First, what do I mean by callout? Simply put, a callout is any time the platform initiates communication with a non-platform server. In other words, any time you ask the platform to “call” out to another computer. As you can imagine, these callouts are SSL enabled, meaning that whenever possible, communication between the platform and the external computer is encrypted. Unfortunately, this also means that if the computer being called out to negotiates the encryption down to SSLv3, well, it’s effectively unencrypted. This is a “Bad Thing”.(tm)
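For the developers reading along, here’s roughly what a callout looks like in Apex. This is a minimal sketch: the endpoint is hypothetical and would need to be registered as a Remote Site Setting, and the SSL/TLS negotiation described above happens behind the scenes when the request is sent.

HttpRequest req = new HttpRequest();
req.setEndpoint('https://sharepoint.example.com/api/documents'); // hypothetical external service
req.setMethod('GET');
// The platform and the remote server negotiate SSL/TLS before any data moves.
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode() + ' ' + res.getBody());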

Now, to be even more specific, this means that:

  • If your Salesforce org communicates with any other internet-connected computer, say because you’ve asked it to talk to your SharePoint server (note: SharePoint is just an example, and I cannot speak to the myriad of complex configuration mistakes that could exist and cause a SharePoint service to degrade to SSLv3)
  • If that computer has SSLv3 enabled
  • If the encryption scheme negotiation is, for whatever reason, forced to degrade to SSLv3

Then your communication is effectively unencrypted. If an attacker were sufficiently motivated, they could get at your data.

Here’s the nasty catch: if either side has disabled SSLv3, and the encryption negotiation cannot settle on a version of TLS, the entire call will fail, because not making the call is preferable to making a call that everyone can read. So if your SharePoint server’s admin has disabled SSLv3, but for whatever reason Salesforce cannot negotiate a version of TLS with your SharePoint server, the callout will fail because no suitable encryption scheme can be negotiated. Updates to SharePoint may start failing, for instance.

In a perfect world, all computers would be upgraded in such a way that prevented SSLv3 from being used. Importantly, if only one side of the communication prohibits SSLv3 and the two computers are able to negotiate a higher level of encryption this isn’t an issue. If you own the server(s) being called out to, you can work to ensure you properly accept TLS1.2.

Or you can wait until Salesforce stops allowing SSLv3 on their end… On 12/20/2014

Either way, SSLv3 should be disabled!

Eval() in Apex. Secure dynamic code evaluation on the Salesforce1 platform.

What is eval()?

Eval is a common method in programming languages that allows the developer to do some Metaprogramming. I’m sure that answer actually raised more questions than it answered, so let’s take a step back and talk about how computers interpret our code.

Whether at compile time or runtime, the programming language itself is responsible for translating human-readable code into something the computer can do. What differs amongst languages is the grammar the human-readable code takes.

Some languages are “highly dynamic” while others are … well, less dynamic. The hows and whats of defining “dynamic” are a controversy in their own right and far beyond the pay grade of this blog post, so let me just speak about one of the banner features of dynamic languages: Metaprogramming.

Remember Inception? Like Inception, Metaprogramming is a bit of a mind bender, but the essence of Metaprogramming is that instead of writing code to solve one problem, developers instead write code that solves many problems; or, as I like to think of it – developers write code that writes code on the fly.

The idea behind Eval() is to have the compiler or interpreter of the language take a string of text and read and interpret it as if it were actually code. If you’re not a coder, you may still be waiting for the punch line; what makes this all very important is that, as coders, we can create that string programmatically, mixing in variables for class names, values, etc. This allows for highly dynamic software that, in effect is capable of writing itself.

Why Eval()?

On the Salesforce1 platform, we essentially have two programming languages available to us: Apex and Javascript. Javascript is considered a dynamic language; Apex, not so much. This is demonstrated by the fact that Javascript provides an eval() method whereas Apex does not. Additionally, Javascript is only available within the browser, so we cannot utilize its eval() method for Apex-based API integrations. So why create an Apex Eval() method? Well, the idea hit me when I was trying to find a way to parse JEXL expression strings in Apex.

variable1 eq '1' or AwsomeVar eq '1' or AwesomeSauce eq '1' or BowTiesAreCool eq '1' or theDoctor eq '1'

JEXL, which you can see in all its glory above, is basically a programming language unto itself. I would receive these JEXL statements from an API, and I needed to evaluate the expressions for true or false. I knew I could pretty easily build a map of JEXL variable names to Apex variable names, and likewise replace operators like eq, to produce something like this:

variable1__c == true || AwsomeVar__c == true || AwesomeSauce__c == true || BowTiesAreCool__c == true || theDoctor__c == true

Wrap that in an IF() statement and we’re off to the races. Here is where Eval() comes in handy. With Eval() I can pass in that translated string, and evaluate it within an if statement. Using Eval() like this means that whenever the integrated API changes a validation JEXL string, my integration can automatically reflect that validation change.
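For illustration, here’s roughly what that translation step could look like in Apex. This is a naive sketch rather than my production code: the map of JEXL variable names to field references is hypothetical, and real JEXL would warrant proper parsing instead of blind token replacement.

public class JexlTranslator {
    // Swap JEXL variables and operators for their Apex equivalents.
    public static String translate(String jexl, Map<String, String> jexlVarToApexRef) {
        String apexExpr = jexl;
        for (String jexlVar : jexlVarToApexRef.keySet()) {
            apexExpr = apexExpr.replace(jexlVar, jexlVarToApexRef.get(jexlVar));
        }
        return apexExpr
            .replace(' eq ', ' == ')
            .replace(' or ', ' || ')
            .replace(' and ', ' && ')
            .replace('\'1\'', 'true');
    }
}

Feed it the JEXL string from the API and a map like new Map<String, String>{ 'variable1' => 'variable1__c' }, and you get back a string Apex can evaluate.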

How Eval()?

So how do we create an Eval() method? Salesforce provides us with a REST-based Tooling API that exposes the Execute Anonymous method. Utilizing the Tooling API’s REST access to (securely) call Execute Anonymous allows us to pass a string of code in and have it evaluated as if we were using the Developer Console’s Execute Anonymous window. Note, this means there are two requirements for Apex Eval() to work: API access (sorry, PE), and setting up a Remote Site in your org that allows you to call out to your own instance of Salesforce, e.g. na4.salesforce.com or cs3.salesforce.com. Once you’ve met those two requirements, we’ll utilize the excellent apex-toolingapi library for calling the Tooling API. Because Apex is a typed language, our Eval methods will need to return a specific type. In my original use case, I wanted to know the Eval’d result of a Boolean expression. To do so, I created the Dynamic class, with the following method:

[gist https://gist.github.com/noeticpenguin/cd457c5b969b48b1f28a]

I’m using an exception so that I can capture and return typed data from the exec anonymous call. This allows us to catch only a particular type of exception, in this case IntentionalException for success use cases, while still retaining the ability for our anonymous executed code to throw a different kind of exception if needed. I’ll leave it as an exercise for the reader to build out other types of Eval methods.
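My actual implementation is in the gist above; what follows is a stripped-down sketch of the same idea, making the REST call by hand instead of through the apex-toolingapi library. It assumes your own instance is registered as a Remote Site and that the expression you pass in is trusted:

public class DynamicSketch {
    // Thrown inside the anonymous block purely to smuggle the result back out.
    public class IntentionalException extends Exception {}

    public static Boolean evalBoolean(String expression) {
        String body = 'if (' + expression + ') {'
            + ' throw new DynamicSketch.IntentionalException(\'true\'); }'
            + ' else { throw new DynamicSketch.IntentionalException(\'false\'); }';

        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
            + '/services/data/v31.0/tooling/executeAnonymous/?anonymousBody='
            + EncodingUtil.urlEncode(body, 'UTF-8'));
        req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());

        HttpResponse res = new Http().send(req);
        // The thrown message surfaces in the exceptionMessage field of the executeAnonymous response.
        Map<String, Object> result = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        String message = (String) result.get('exceptionMessage');
        return message != null && message.contains('true');
    }
}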

So there you have it: Eval(), a.k.a. Execute Anonymous, within a typed, generally non-dynamic language. Please use this for good, and remember you will incur REST API call costs when using this.

The Bystander Challenge. (No Ice Required.)

Recently I was introduced to an interesting TED talk by Jackson Katz. In his talk (which you can find here) he makes quite a few valid and interesting points. But, for me, the most interesting thing he talks about is what I’m going to call the Bystander Protocol. Katz says that:

A bystander is defined as anybody who is not a perpetrator or a victim in a given situation, so in other words friends, teammates, colleagues, coworkers, family members, those of us who are not directly involved in a dyad of abuse, but we are embedded in social, family, work, school, and other peer culture relationships with people who might be in that situation.

Katz is specifically speaking about abuse in his talk. I think too often we hear or read “abuse” and understand it to mean physical, sexual, verbal or psychological abuse. While those forms of abuse must be addressed, they are blessedly not the most common forms of abuse. I don’t mean to downplay, in any way, these forms of abuse. Indeed, I think there’s a more pervasive form of abuse that is particularly prevalent in the technology sector. By ignorance or malice (honestly, I don’t care which) I believe we as a society tend to use language –metaphors, words and idioms– that cull our imaginations and those of our listeners and readers. Sexist language is, I believe, especially prevalent in the technology sector.

I’m sure we can all easily find examples of overt sexism in the technology sector. Earlier this year, this happened:

Sadly, this slide praises only the physical attributes of the metaphor (looks beautiful) and denigrates the personality and intellectual attributes. Thankfully, within a few short hours there was a prompt and complete apology. But, as one commentator pointed out, the fact that no one thought to talk the speaker out of this metaphor betrays the underlying problem: no one caught it ahead of time because we’re not aware enough of the issues to catch them.

Beyond overt sexism, I feel we casually use gendered pronouns and gendered examples in our talks, blog posts and even example code. I imagine it’s hard to hear “Women in Technology, yay!” from corporations while reading “Your developer can do X if he chooses.” At the very least it’s inconsiderate. Again, I doubt many people, regardless of gender, intentionally choose to be exclusive with their pronouns and language; but I do think the habit is pervasive.

As we approach Dreamforce ’14 I’m reminded of our industry’s history with sexism and struck by the simplicity of Katz’s action point:

What do we do? How do we speak up? How do we challenge our friends? How do we support our friends? But how do we not remain silent in the face of abuse?

(Emphasis mine). I think the answer lies in the Bystander Protocol. As Bystanders, we’re present and able to speak truth to power gently and positively. I believe we, as Dreamforce attendees, can and should expect our speakers (myself included) to avoid not only overt sexism, but exclusive language in general. I don’t imagine this working in an aggressive, confrontational manner. When presented with gender-specific speech, or even language that presumes gender norms, we can (and should) politely, calmly ask the speaker to consider other language.

I believe we should pledge to actively participate in conversations as Bystanders, neither using sexist or exclusive language nor permitting such speech to go unchallenged. Let’s actively strive towards a culture of accountability and acceptance by doing something. None of us could hope to change the whole of the tech sector’s misogynistic culture by ourselves. No one can do it alone, but we can’t stand by in silence. As Bystanders at the world’s largest cloud computing conference, we have the opportunity and responsibility to do that something by speaking out whenever we find hateful or even careless speech.

In the end, what will hurt the most is not the words of our enemies but the silence of our friends. ~ Martin Luther King Jr.

I want to challenge you, my fellow speakers and attendees of Dreamforce ’14, to pledge to do just that. Tweet with the hashtag #df14Bystander to take the pledge to speak out when needed, to politely ask questions of leaders and speakers who use exclusive language, to report overtly sexist language, and to avoid the use of such speech yourself. Use “developers”, “devs”, “admins”, “we”, or “they” instead of “him” or “her” in your talks. Let’s make this the tech conference where Women in Technology isn’t about the latest sexist faux pas, but about how women are presumed equal and capable. Wouldn’t that be a news blurb for @Salesforce to press release?

On Heroes and Suicide

What’s wrong with death sir? What are we so mortally afraid of? Why can’t we treat death with a certain amount of humanity and dignity, and decency, and God forbid, maybe even humor. Death is not the enemy gentlemen. If we’re going to fight a disease, let’s fight one of the most terrible diseases of all, indifference.

~ Robin Williams, as Patch Adams, Patch Adams.

Last night I learned Robin Williams died. As of right now, everything indicates he took his own life. A colleague tweeted that whenever he learns of the death of a celebrity he admired, he stops and asks, “So, where do we go from here?” I won’t presume to speak for what Robin Williams would or would not have wanted his death to mean, but I think this is an excellent time to pause and consider what Robin Williams chose to teach us about the world.

At the beginning of Patch Adams, Williams’ portrayal of a depressed man turned physician begins with a few words, not from the historical Patch Adams, but from Dante’s epic tale of descent into hell:

In the middle of the journey of my life, I found myself in a dark wood, for I had lost the right path.

~Dante Alighieri, Inferno (Canto 1, 1-3)

The movie goes on to show us how Patch found the right path, though arguably not before trekking through the underworld. Importantly, and perhaps most poignantly, Williams’ portrayal of Patch teaches us two key lessons.

  1. Though the right path is lost, it can be regained. This has always been hopeful news for me. As a friend of mine once told me, you have to have hope to get up in the morning. Hope, however fleeting, must not be forgotten. The right path can, and will, be found. Some may find this ironic given the circumstances of Williams’ death. Williams may have taken his own life, but until that fatal decision was enacted there was always hope.
  2. Hope comes in many forms and in the weirdest of places. Humor, as Patch taught us, can be found even in the most hopeless of situations. Asking the catatonic man whose arm is forever pointed up where Heaven is makes light of a condition many would find hopeless, and in so doing lightens the mood, lifts the spirits and brings hope to the others in his group therapy session. Hope that their condition wasn’t nearly as easy to make fun of.

We don’t read and write poetry because it’s cute. We read and write poetry because we are members of the human race. And the human race is filled with passion. And medicine, law, business, engineering, these are noble pursuits and necessary to sustain life. But poetry, beauty, romance, love, these are what we stay alive for… That you are here – that life exists, and identity; that the powerful play goes on and you may contribute a verse. That the powerful play goes on and you may contribute a verse. What will your verse be?

Robin Williams, as John Keating, Dead Poets Society.

This morning, I heard a demagogue run their mouth about Williams’ apparent suicide, characterizing it as a deeply selfish act to be condemned. I heard another person say he lost the fight to Depression. I find it hard to be charitable to either of these statements. Depression isn’t a battle to be won or lost, but a disease to be treated. A really shitty disease we’re all susceptible to. One we’ve all faced to some degree or another. Additionally, to call this a deeply selfish act is, in my opinion, to wash one’s hands of the responsibilities we have to our friends and family with this disease. Williams is often quoted as saying:

I used to think the worst thing in life was to end up all alone, it’s not. The worst thing in life is to end up with people that make you feel alone.

~Robin Williams

I am not saying that those around him made him feel alone. Far be it from me to presume such a thing. I am, however, saying that when we find friends and family struggling with Depression, we unconsciously treat them in ways that often feel isolating and judgmental. Ever told someone to “just cheer up?” Ever been told to “just cheer up?” Intentions don’t match up with what’s heard. We mean well, but we end up marginalizing or delegitimizing their struggles, or worse, leaving them feeling like they’re not understood. Alone.

I’m writing this down not just out of regret and loss for a man who influenced my life in a myriad of subtle ways, but also because Depression is one of those things where the casualties are more than the friends and family left behind by suicide; it takes our hearts and souls as well. No one wants to get the call that someone we love has committed suicide. No one wants to relentlessly interrogate every phrase and action of every interaction they had with that loved one.

If you’re reading this, there’s a strong chance you work in the high-tech industry. There’s a good chance you’ve known coworkers or friends with depression. There are simple things we can do to help. To show hope, to refuse their urge to isolate, and our urge to allow it. To walk with them through hell and back. I’m not a therapist, and I don’t want anyone to confuse this advice with “professional advice,” but here’s what I think we can do for each other to help:

  1. Stop. We lead busy lives, often artificially busy lives. One of the most powerful things we can do for anyone is just stop and spend time with them. Coffee. Dinner. A walk after lunch. Time well spent. As friends we have many responsibilities, but chief amongst them is always to provide truth and perspective to our friends.
  2. Listen. Listen to understand, but more importantly, to show understanding. This isn’t listening while driving, or listening while writing an email. I mean actively listening. Ask questions. Does some struggle not make sense? Ask a clarifying question.
  3. Validate. This isn’t to say you should tell them they’re 100% right in feeling a given way about a given situation. What I mean here is remind them that their struggles aren’t unique to them. Are they having relationship problems? “You know X, that was really shitty of Y.”
  4. Question. Help question assumptions. Herein lies the hope. So much of our lives is spent communicating; how much of that communication seeks to fix miscommunication? Often the assumptions we make about the world around us are founded on miscommunication. Having friends who question those assumptions helps us find hope in what otherwise might seem a hopeless situation.
  5. Encourage them to seek professional help. Don’t stigmatize it, and don’t let others stigmatize it either. Never forget that if you feel your friend is in danger, the better part of valor, the better part of humanity, is to risk a friendship by involving professionals rather than to risk a friend.
  6. Write this number down on a card, and put it in your wallet for emergencies: National Suicide Prevention Hotline: 1-800-273-8255

All of life is a coming home. Salesmen, secretaries, coal miners, beekeepers, sword swallowers, all of us. All the restless hearts of the world, all trying to find a way home. It’s hard to describe what I felt like then. Picture yourself walking for days in the driving snow; you don’t even know you’re walking in circles. The heaviness of your legs in the drifts, your shouts disappearing into the wind. How small you can feel, and how far away home can be. Home. The dictionary defines it as both a place of origin and a goal or destination. And the storm? The storm was all in my mind. Or as the poet Dante put it: In the middle of the journey of my life, I found myself in a dark wood, for I had lost the right path. Eventually I would find the right path, but in the most unlikely place.

~ Robin Williams, as Patch Adams in Patch Adams.

Does your company value you enough to send you to Dreamforce?

Look. Here’s the deal. If your company won’t send you to Dreamforce, it’s time to give serious thought to finding one that will. Dreamforce happens just once a year, and it’s four days packed full of information. More than sessions, mini-hacks and several hundred pounds of new books, Dreamforce is your chance to cross-pollinate ideas with other developers and admins. The single greatest reason you need to attend Dreamforce isn’t to see Reid lose his voice in the IoT lab, but rather to see new and innovative ideas and solutions to problems. Problems you may be struggling with, problems you don’t yet even have — but will. Simply put, Dreamforce is the only event in the world where 100k people get together to cross-pollinate ideas. You and I won’t be the smartest, most experienced people at Dreamforce this year, but we’re not the least experienced people there either. We go to learn, and to teach, in equal measure. So if your company won’t send you to Dreamforce, find one that will, and make sure that if Dreamforce ’14 isn’t in the cards, Dreamforce ’15 is.

That’s fine, Kevin, but how do I do that?
The very best part of Salesforce is the rich community that’s grown up around it. Better still, these are the people who know who’s hiring, and who know whether or not Dreamforce is a regular thing. [Find your usergroup](https://success.salesforce.com/userGroups), [ask on the Dev community](https://developer.salesforce.com/forums/?feedtype=RECENT&dc=Jobs_Board&criteria=UNANSWERED&#!/feedtype=RECENT&criteria=UNANSWERED), get the UG leaders to have a “we’re hiring” sheet, or post a weekly “who’s hiring” thread to the success community for your UG. The community knows the power of Dreamforce, and they can help you find a company that will value *you* enough to send you. Because the truth is, if your company won’t send you to Dreamforce, they undervalue you and the work you do. Ask a UG leader, look on the success community, find a better opportunity. Never have we Salesforce Admins and Developers been more in demand than now. [We are the kingmakers](http://thenewkingmakers.com/) of business, helping realize processes, facilitate communication and increase ROI. Dreamforce only hones those skills as we jostle, literally, from one place to another, learning from and teaching each other.

The Good, The Bad and the Ugly of Summer ’14 for Developers.

It’s that time of year again. All the good developers and all the good admins are eagerly awaiting the end of planned maintenance and the new gifts, er, features, that Salesforce is providing. At 340 pages, the Release Notes strike a great balance of detail-without-being-boring, and I highly encourage everyone to read through them. If, however, you don’t happen to have an adorable screaming infant providing you with extra reading time between 2 and 4am, have no fear; I’ve written up a few highlights. I don’t want to let all the cats out of the bag, but suffice it to say, there’s The Good, The Bad and The Ugly. Without further ado:

The Good.

  1. Our leader here is innocuously described as “Speed Up Queries with the Query Plan Tool” (see page 241 ff.). In essence, this is the Salesforce equivalent of MySQL’s EXPLAIN, PostgreSQL’s EXPLAIN ANALYZE or Oracle’s EXPLAIN PLAN functionality. If you’ve never had the pleasure of arguing with a relational database query written by the intern… well, you may not know about explain. In general these tools all work the same way: prepend any given query with the keyword(s) EXPLAIN, and the database will return information about how it will gather the information you’re looking for, instead of the actual query results. Here’s why you need this: You and I both put our pants on one leg at a time, but I’ve written queries against objects with more than 30 million records, and I say all our SOQL queries should be reviewed with this explain tool. With it we can see which indexes, if any, the query optimizer is able to utilize. Here’s what SOQL’s explain output looks like:
{
  "plans" : [ {
    "cardinality" : 2843473,
    "fields" : [ ],
    "leadingOperationType" : "TableScan",
    "relativeCost" : 1.7425881237364873,
    "sobjectCardinality" : 25849751,
    "sobjectType" : "Awesome_Sauce__c"
  } ]
}

As they say in the hood, “that there query sucks”. See that “leadingOperationType” key in the JSON results? TableScan means it has to scan every record. Ow. I should really refactor that query so that explain identifies fields it can index off of. With Summer ’14 there’s a spiffy Dev Console button to access this information. Wicked.
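If memory serves, the same plan JSON is also exposed through the REST API’s query resource via an explain parameter, so you can grab it programmatically. Here’s a rough Execute Anonymous sketch; the query is hypothetical, and calling back into your own instance requires a Remote Site Setting:

// Fetch the query plan for a SOQL statement instead of its results.
String soql = 'SELECT Id FROM Awesome_Sauce__c WHERE Name = \'demo\''; // hypothetical query
HttpRequest req = new HttpRequest();
req.setMethod('GET');
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
    + '/services/data/v31.0/query/?explain=' + EncodingUtil.urlEncode(soql, 'UTF-8'));
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
System.debug(res.getBody()); // the same "plans" JSON shown above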

Other good highlights include:

  1. The ability to override remote object methods
  2. Pricebook Entries in tests. Without “SeeAllData=true”, aka “DISASTERHERE=true”
  3. Un-restricted describes. If you build dynamic UIs this is indispensable!

The Bad.

  1. There’s an aside on page 191 that bodes ill for many of us. If you’ve ever put Javascript in a home page component, start heeding the warning now: after Summer ’15, no more JS in home page components. Convert to the new Visualforce component, or suffer the wrath of progress.

The Ugly.

Ok, I can’t really blame Salesforce for this, but the simple fact of the matter is that not all Salesforce devs are created equal. As a Salesforce consultant and developer I have inherited a number of orgs plagued with test classes that execute code, but make no assertions.

As a developer, I understand the importance of testing code, and believe that we should always write useful tests. Additionally, I know Salesforce runs the unit tests in our orgs before every release. Without assertions, however, these test runs tell us only that the code runs, not that it’s functioning properly. While there are rarely, if ever, technological solutions to social problems (like the lack of rigor and professionalism with regard to testing amongst Salesforce developers), I believe it is in the best interest of not only Salesforce Developers but also Salesforce itself to build a feature allowing administrators to enable an org-wide flag requiring all test methods to call assert methods, with sane protections against such clear abuses as System.assert(true);

This can only result in better testing, and therefore better code in production, as well as better feedback to Salesforce about the viablity of new API versions.
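To make the contrast concrete, here’s a contrived sketch; the OpportunityRollup class and the rollup behavior it implies are entirely hypothetical:

@isTest
private class OpportunityRollupTests {
    // Executes code but verifies nothing: exactly the pattern an "assertions required" flag should reject.
    static testmethod void coverageOnlyTest() {
        OpportunityRollup.recalculate(); // hypothetical class under test
        System.assert(true);             // always passes, proves nothing
    }

    // Executes the same code, then asserts an observable outcome.
    static testmethod void meaningfulTest() {
        OpportunityRollup.recalculate();
        System.assertEquals(0, [SELECT count() FROM Opportunity WHERE Amount = null],
            'Recalculation should have populated every Amount');
    }
}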

You should vote for this idea here:

https://success.salesforce.com/ideaView?id=08730000000l6zHAAQ

A reusable Redirect controller for VisualFlows

The problem at hand

Visualflow is one of the most powerful tools available to Salesforce Admins and Developers. Often the biggest barrier to adoption isn’t a technical issue of capabilities but the lack of realization that visualflows can do that! Unfortunately, one of the technical issues that seems to come up often (at least recently) is how to create a record in a flow, and then upon successful completion of the flow, redirect the user to the new record. The use cases are pretty broad, but I was roped into the following use case. A flow is written to guide users through creating a case. When the case is created, and the flow is finished, we want to redirect the users to the newly created case’s detail page. Sounds simple, right?

Good Guy VisualFlow.

Unfortunately, the finishLocation=”” attribute of the Visualforce flow tag doesn’t accept flow variables. It’s therefore impossible, at this time, to create a flow with a programmatically defined finishLocation. What you can do, however, is write a Visualforce controller that uses a getter function to programmatically generate the finishLocation attribute. Rather than creating these controllers one-off as you need them, I’ve created a reusable Visualforce controller that you can utilize with any flow you write to redirect to any given RecordID.

Show Me The Code.

Note well: You need to create a flow named “RedirectFlow” that consists of a decision step that launches the flow you actually want to kick off. Line 4 of the Visualforce page is a parameter for defining which flow you actually want to start. This “wrapper flow” bit is needed to make the controller re-usable. Big thanks to SalesforceWizard for pointing out the mistake I made. He’s the man.


<apex:page Controller="FlowRedirectController">
    <flow:interview name="RedirectFlow" interview="{!redirectTo}" finishLocation="{!finishLocation}">
        <!-- This Parameter is *required!* -->
        <apex:param name="StartFlow" value="YOUR_FLOW_NAME_HERE" />
        <!--
        Any Params you need to pass into your flow.
        <apex:param name="CaseId" value="{!CaseId}"/>
        -->
    </flow:interview>
</apex:page>


public class FlowRedirectController {
    public Flow.Interview.RedirectFlow redirectTo { get; set; }

    public FlowRedirectController() {
        Map<String, Object> forTestingPurposes = new Map<String, Object>();
        forTestingPurposes.put('vFinishLocation', 'codefriar.wordpress.com/');
        redirectTo = new Flow.Interview.RedirectFlow(forTestingPurposes);
    }

    public PageReference getFinishLocation() {
        String finishLocation;
        if (redirectTo != null) {
            finishLocation = (String) redirectTo.getVariableValue('vFinishLocation');
        }
        PageReference send = new PageReference('/' + finishLocation);
        send.setRedirect(true);
        return send;
    }
}


@isTest
private class Test_FlowRedirectController {
    static testmethod void SetVariablesTests() {
        PageReference pageRef = Page.ExampleVFFlow;
        Test.setCurrentPage(pageRef);
        FlowRedirectController testCtrl = new FlowRedirectController();
        System.assertEquals(testCtrl.getFinishLocation().getUrl(), '/codefriar.wordpress.com/');
    }
}

Using the Streaming API for realtime, third-party API notifications.

A little background.

Recently I was working on a Salesforce app that interacts with a third-party API. In our case, users utilize Salesforce to sell complex digital products served by a remote fulfillment platform. Unfortunately, the remote API wasn’t designed with Salesforce in mind. As a result, simple-sounding business processes required multiple API calls. The sheer number of calls needed made direct callouts impractical. To overcome this we built a middleware application hosted on Heroku. We intentionally architected our middleware so a single Salesforce callout could trigger the process. In response to the callout, our middleware application uses the REST API to call back into Salesforce and gather all the needed data. Then it makes API calls as needed to push that data to the client’s proprietary fulfillment platform. To ensure the Salesforce user isn’t waiting for a page to load, the middleware app works asynchronously. Unfortunately, this also complicates success and failure messaging to the Salesforce user. This is where the Streaming API comes into play. Using the Streaming API we can show realtime success and error notifications from our middleware to the Salesforce user.

Enter the Streaming API.

If you’re not familiar with it, the Streaming API was introduced a few releases ago and is one of the most powerful additions to the Salesforce platform. Here’s how it works: as a developer, you establish a “Push Topic”. PushTopics take the form of a PushTopic object record. PushTopic records have a few key fields; namely:

  • Query, which holds a string representation of a SOQL query
  • notifyForOperationCreate, if true insert dml calls will trigger a push event
  • notifyForOperationUpdate, if true update dml calls will trigger a push event
  • notifyForOperationDelete, if true delete dml calls will trigger a push event
  • notifyForOperationUndelete, if true undelete dml calls will trigger a push event

These fields are all booleans. If a field is set to true, any corresponding DML statement whose data matches your query will result in the API pushing that record. For instance, if you’ve saved your push topic record with:

notifyForOperationCreate=true
query='SELECT Id, Name, Phone FROM Account'

then every Account created from that point forward will be pushed to subscribers of that topic.

Putting it all together – The middleware changes

With our API integration example, we need to make a change to our middleware to enable notifications. Likewise, inside our Salesforce app, we’ll need to do two things:

  • Establish a push topic.
  • Edit our Visualforce page to subscribe to the push topic and display the notifications.

Let’s start with the middleware modifications. Our middleware application returns final results to Salesforce by creating Audit_Log__c records. As originally designed, it’s set up to create an audit log only at the end of the process. If we want to see immediate results, however, we’ll need to extend our middleware to create multiple Audit_Log__c records — one per step in the process. The key to this integration, then, is to ensure our Audit_Log__c records trigger our push topic. In our case the solution is to create new Salesforce audit log records logging the results for each step of the process. Each of these records logs the action taken, whether it succeeded, and what, if any, error messages were returned.

VisualForce changes

With our middleware setup to log individual events, we can turn our attention back to Salesforce. First we need to establish a PushTopic record. The easiest way to create a PushTopic is to use the Developer console. Open up the dev console and then click on the Debug menu and choose “Open Anonymous Apex” window. This anonymous apex window allows us to execute small bits of code without having to generate a full class. Copy and Paste this code sample to your Anonymous Apex window:

PushTopic pushTopic = new PushTopic();
pushTopic.Name = 'ExternalAPINotifications';
pushTopic.Query = 'SELECT Id, Name, Action__c FROM API_Audit_Log__c';
pushTopic.ApiVersion = 30.0;
pushTopic.NotifyForOperationCreate = true;
pushTopic.NotifyForOperationUpdate = false;
pushTopic.NotifyForOperationUndelete = false;
pushTopic.NotifyForOperationDelete = false;
pushTopic.NotifyForFields = 'Referenced';
insert pushTopic;

Click execute, and your anonymous apex window should disappear. If you see a Success message in the log window, move on!

Within our Visualforce page, we have a bit more work to do. Essentially, we need to incorporate a few Javascript libraries and display the results. To do this, we’ll need to:

  • create a Static resource bundle
  • load a few javascript files on our visualforce page
  • add some markup to display
  • write a javascript callback
  • add a filter

While Salesforce handles the work of streaming the data, to display it we’ll need to subscribe to our pushTopic. To subscribe we use the CometD javascript library. CometD is a javascript implementation of the Bayeux protocol, which the Streaming API uses. Using this library, along with jQuery and a helper library for JSON, we can subscribe with a single line of code.

$.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...}

But let’s not get ahead of ourselves. First, let’s create a static resource. Static resources are created by uploading zip files to Salesforce. For more information on creating Static resources see this helpful document. I’ve created a helpful zipfile containing all the libraries you’ll need to use the Streaming API here: https://www.dropbox.com/s/4r6hwtr3xvpyp6z/StreamingApi.resource.zip Once you’ve uploaded that static resource, open up your Visualforce page, and add these lines at the top:

<!-- Streaming API Libraries -->
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery-1.5.1.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/cometd.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/json2.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery.cometd.js')}"/>

These lines tell Visualforce to include the javascript you need on your page.

The Final Countdown!

In order for the Streaming API to add HTML segments to our page whenever the API fires a PushTopic, we’ll need to put a div on our page. Where is largely up to you, but I tend to keep my messaging at the top of the page. This is similar to how Salesforce does its own validation messaging. Wherever you decide to put it, add a div tag and give it the id of “apiMessages”. Something like this will do nicely:

<div id="apiMessages"></div> <!-- This Div is for use with the streaming Api. Removing this div hurts kittens. -->

Then at the bottom of your page’s markup, find the ending </apex:page> tag. Just above that tag, place a new script tag block like this:

<script type="text/javascript">
</script>

Inside this script block, we’re going to subscribe to our pushTopic and set up how our data looks when presented. To start, let’s create a jQuery on-document-ready handler like this:

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      // Everything is Awesome Here. Here we can do stuff. Stuff that makes our bosses go "whoa!"
    });
  })(jQuery);
</script>

All this can look a bit intimidating, but code inside this block will run when the browser signals that the document is ready. It’s in here that we want to initialize our CometD connection to the Streaming API and do something with our data. The CometD library we’re using is implemented as a callback system, so we need to write a callback function that outputs our data to the screen. But first, let’s hook up CometD to the Streaming API.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({ // <-- That line invokes the cometd library.
        // This next line snags the current logged in users' server instance: ie https://na5.salesforce.com and attaches the comet endpoint to it.
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        // Always vigilant with security, Salesforce makes us Authenticate our cometd usage. Here we set the oAuth token! Don't forget this step!
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
    });
  })(jQuery);
</script>

A couple of important notes here. The url and request headers are identical, regardless of org. Astute observers will note that we’re letting Visualforce substitute in actual API session credentials. This means that the Streaming API is following Salesforce security. If you can’t see the streamed object normally, you won’t be able to see it here.

Once we’ve setup the connection, we can establish the subscription. As before, it’s a simple one-liner addition to our code.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      // **** this is the crucial bit that changes per use case! ****
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...});
    });
  })(jQuery);
</script>

The subscribe method accepts two parameters. The first is the text representation of the stream to subscribe to. It’s always going to start with ‘/topic/’. The second is a callback function to be executed whenever data is received. In case you’re new to the Javascript or asynchronous development community, a callback is a method executed whenever a given event occurs, or when another method completes and calls it.

In our example above, we’re creating an anonymous function that accepts a single argument: message. message is a javascript object available to the body of our function. Within this function you can do anything that Javascript allows, from alert(); calls to appending objects to the DOM tree. Functionally, appending elements to the DOM is the most practical, so let’s build that out. Remember the div we created a few steps back? The one with the id “apiMessages”? Let’s put that to work.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) { //<-- that function(message) bit -- it starts our callback
                $('#apiMessages').append('<p>Notification: ' +
                    'Record name: ' + JSON.stringify(message.data.sobject.Name) +
                    '<br>' + 'ID: ' + JSON.stringify(message.data.sobject.Id) + 
                    '<br>' + 'Event type: ' + JSON.stringify(message.data.event.type)+
                    '<br>' + 'Created: ' + JSON.stringify(message.data.event.createdDate) + 
                    '</p>');    
                }); // <-- the } ends the call back, and the ); finishes the .subscribe method call.
    });
  })(jQuery);
</script>

Let’s unpack that a bit. To start with, we’re invoking jQuery via $ to find the element with id “apiMessages”. We’re asking jQuery to append the following string to the apiMessages div for every record it receives. Thus, as records come in via the Streaming API, a paragraph tag is added to the apiMessages div containing the text block “Record Name: name of record” <br> “Id: id of record” <br> … and so forth. It’s this append method that allows us to display the notifications that are streamed to the page.

Gotchas

At this point we have a functional streaming api implementation that will display every streaming record that matches our PushTopic. This can add a bunch of noise to the page as we probably only care about records related to the object we’re viewing. There are two ways to accomplish this kind of filtering. The first is to adjust our subscription. When we subscribe to the topic we can append a filter to our topic name like this:

$.cometd.subscribe("/topic/ExternalAPINotifications?Company=='Acme'", function(message) {...});

In this situation, only records matching the push topic criteria AND whose Company field is ‘Acme’ would be streamed to our page. You can filter on any field on the record this way. For more complex filtering, you can filter on the message’s data itself. Because you’re writing the callback function, you can always do nothing if you determine that the record you received isn’t one you wish to display.

Next steps, new ideas and other things you can do!

One thing we noticed after developing this is that we were left with a very large number of audit log records. In the future we may set up a “sweeper” to collect and condense the individual event audit logs into a singular audit log of a different record type when everything has gone smoothly. We’ve also talked about creating a Dashing dashboard with live metrics from the fulfillment server. What ideas do you have? Leave a comment!

An Idea so good, you’ll buy yourself a beer for implementing it!

Charge it, point it, zoom it, press it,
Write it, cut it, paste it, save it,
Load it, check it, quick – rewrite it,
Plug it, play it, burn it, rip it,
Drag and drop it, zip – unzip it,
Lock it, fill it, call it, find it,
View it, code it, jam – unlock it — Daft Punk’s Technologic.

(Hair) Triggers.
If you were to ask your project manager and a developer to define a trigger, you’d probably end up with three very different answers. Often, Triggers are a quick-fix for project managers who know the declarative interface just won’t solve this one. Raise your hand if you’ve ever heard the phrase “just a quick trigger”? Sometimes. Sometimes, triggers are just that, a quick-fix. But if you ask a developer, you might hear those Daft Punk lyrics chanted in monotone: “Write it, cut it, paste it, save it, Load it, check it, quick – rewrite it.” Sooner, rather than later, developers learn firsthand the rabbit hole that triggers can be. After all, what *kind* of trigger is asked for? …is really needed? How will adding this trigger affect the other triggers already in place? How will existing workflow and validation rules play into the trigger? Will the trigger cause problems with future workflows?
Triggers are phenomenally powerful, but that phenomenal power comes with phenomenal (potential) complexity. A while back, Kevin O’Hara, a Force.com MVP from LevelEleven (they make some fantastic sales gamification software for Salesforce over at: http://leveleleven.com/), posted a framework for writing triggers that I like to call Triggers.new

Triggers.new
Kevin O’Hara’s framework is based on one big architectural assumption — namely, that your trigger logic doesn’t actually belong in your trigger; instead, your trigger logic lives in a dedicated class that is invoked by your trigger. Regardless of your adoption of this framework, placing your trigger logic in a dedicated class provides valuable structure to triggers in general and makes long-term maintainability much simpler. With this assumption in mind, the framework actually changes very little about how you write the actual trigger file. Here’s a generic definition of a trigger utilizing the framework.

trigger DescriptiveTriggerNameHere on ObjectNameHere (before insert, before update /* ...whichever contexts you need */) {
    new YourTriggerLogicClassNameHere().run();
}

Inside the logic class there are methods available to override from TriggerHandler that correspond to trigger execution states, i.e. beforeInsert(), beforeUpdate(), beforeDelete(), afterInsert(), afterUpdate(), afterDelete(), and afterUndelete(). It’s inside these methods that your trigger logic actually resides. If, for example, you wanted your ContactTrigger to apply some snark to your Contact’s last name, your ContactTriggerLogic might look something like this:

public class ContactTriggerLogic extends TriggerHandler {
    List<Contact> newContacts;

    public ContactTriggerLogic() {
        // Trigger.newMap isn't available in before insert (no Ids yet), so grab Trigger.new
        this.newContacts = (List<Contact>) Trigger.new;
    }

    public Contact addSnark(Contact c) {
        if (!c.LastName.contains(' is awesome!')) {
            c.LastName = c.LastName + ' is awesome!';
        }
        return c;
    }

    public override void beforeInsert() {
        for (Contact c : newContacts) {
            addSnark(c);
        }
    }

    // if your trigger definition included before update you could include
    public override void beforeUpdate() {
        for (Contact c : newContacts) {
            addSnark(c);
        }
    }
}

So why do the extra work?
Not only does this framework help keep your code organized and clean, it also offers a couple of handy-dandy, very nice(™) helpers along the way. As a trigger developer, you’ll sooner or later run into execution loops. An update fires your trigger, which updates related object B, which has trigger C, which updates the original object … and we’re off. Kevin O’Hara’s trigger framework has a built-in trigger execution limit. Check it out:

public class ContactTriggerLogic extends TriggerHandler {
    public ContactTriggerLogic() {
        this.setMaxLoopCount(1);
    }
}

That bit of code, setMaxLoopCount(1), means that the second invocation of a given method, e.g. afterUpdate(), within the same execution context will throw an error. Much less code than dealing with, and checking the state of, a static variable. Say it with me now: Very nice!

Perhaps even more important than the max invocation count helper is the built-in bypass API. The bypass API allows you to selectively deactivate triggers programmatically, within your trigger code. Say what? Yeah, it took me a second to wrap my head around it too. Imagine the scenario: you’ve got a trigger on object A, which updates object B. Object B has its own set of triggers, and one or more of those triggers may update object A. Traditionally, your option for dealing with this has been just what we did above: use setMaxLoopCount(), or a static variable, to stop the trigger from executing multiple times. But with the bypass API we have a new option; any trigger that is built with this framework can be bypassed thusly:

public override void afterUpdate() {
    this.bypass('AccountTriggerHandler'); // yeah, if you could just ignore that accountTrigger, *that'd be great*
    acc.Name = 'No Trigger'; // wait! where'd acc come from? … just keeping you on your toes.
    update acc; // won't invoke the AccountTriggerHandler
    this.clearBypass('AccountTriggerHandler'); // actually, yeah. need you run that accountTrigger.
    acc.Name = 'With Trigger';
    update acc; // will invoke the AccountTriggerHandler
} // example lifted from github

What’s next?
I believe that trigger frameworks like this one provide quite a few benefits over free-form triggers, not only in terms of raw features but also in terms of code quality. Splitting the logic out of the trigger and into a dedicated class generally increases testability, readability and structure. But this framework is just a starting point. Imagine the possibilities! What if you could provide your Admin with a Visualforce page to enable or disable trigger execution? Wouldn’t that make your admin giggle and offer you Starbucks? #starbucksDrivenDevelopment