I'm happy to report that I've been given company approval to port the relevant components of our Flex data binding library back to Eclipse Data Binding.
I haven't started the actual port yet--there are still some concepts on the Flex side that are not a perfect match to Java and existing idioms in Eclipse Data Binding. You'll see what I mean.
To avoid conflating the Java port with the general API design, I'm just going to present what the Flex API looks like.
Bind.from(source, "foo")
.to(target, "bar");
This binding watches the source.foo property and writes the new value to target.bar each time a change is detected. Now add some validation and conversion magic:
Bind.from(source, "foo")
.validate(Validators.stringToNumber)
.convert(Converters.stringToNumber)
.validate(Validators.greaterEqual(0))
.validate(Validators.lessThan(10))
.to(target, "bar");
Here we've added several additional steps in the pipeline.
- After source.foo changes, we first validate that the string can be converted to a number. If it can, the pipeline continues to the next step; otherwise it terminates.
- Next we convert the string to a number
- Now validate that the number is greater than or equal to zero. If it is, the pipeline continues to the next step; otherwise it terminates.
- Now validate that the number is less than 10. If it is, the pipeline continues and the number, now verified to be in the range [0,10), is written to target.bar.
Now suppose our binding is misbehaving somehow, and we want to troubleshoot. We can add logging steps to the pipeline in between the other steps so we can see exactly what is going on:
Bind.from(source, "foo")
.log(LogEventLevel.INFO, "source.foo == {0}")
.log(LogEventLevel.INFO, "validate {0} is a number")
.validate(Validators.stringToNumber)
.log(LogEventLevel.INFO, "convert {0} to a number")
.convert(Converters.stringToNumber)
.log(LogEventLevel.INFO, "validate {0} >= 0")
.validate(Validators.greaterEqual(0))
.log(LogEventLevel.INFO, "validate {0} < 10")
.validate(Validators.lessThan(10))
.log(LogEventLevel.INFO, "set target.bar = {0}")
.to(target, "bar");
(In Flex, string formatting is done with the {n} style instead of the %s syntax Java inherited from C. The log step passes the values in the pipeline as additional arguments, which you can reference in the message.)
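For Java readers, java.text.MessageFormat already provides this positional style, so a port would not have to fall back to printf syntax. Purely as an illustration (the value variable is just a stand-in):

import java.text.MessageFormat;

Object value = 42;
// Flex-style positional placeholder:
String positional = MessageFormat.format("source.foo == {0}", value);
// The printf-style syntax Java inherited from C:
String printf = String.format("source.foo == %s", value);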
These log steps are a real lifesaver for tracking down and squashing bugs in your binding code.
If you've already worked with Eclipse Data Binding you may have noticed something else: you are no longer constrained to the standard data-binding pipeline. You are free to add steps in the pipeline wherever you like and in any order you like.
Next up: two-way bindings. The Bind class provides a twoWay method, which connects two bindings so that each one ends at the other's starting point:
Bind.twoWay(
Bind.from(source, "foo"),
Bind.from(target, "bar") );
is equivalent to:
var lock:Lock = new Lock();
Bind.from(source, "foo")
.lock(lock)
.to(target, "bar");
Bind.from(target, "bar")
.lock(lock)
.to(source, "foo");
Notice that each binding has a "lock" step in the pipeline. Only one binding can hold a lock at a time. This solves the common infinite loop problem:
- source.foo changes. binding one executes, writing the value to target.bar
- target.bar changes. binding two executes, writing the value to source.foo
- source.foo changes. binding one executes, writing the value to target.bar
- ...
- stack overflow!
Since only one binding can hold the lock at a time, this is what happens instead:
- source.foo changes. binding one acquires the lock and executes, writing the value to target.bar
- target.bar changes. binding two attempts to acquire the lock but it is already acquired. binding two aborts.
- binding one releases the lock
You should never add the same lock more than once to a single binding, since that would guarantee that the binding will never run.
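The post doesn't show the Lock class itself, but the behavior described above only needs a non-reentrant flag (Flex is single-threaded). Here is a minimal sketch of how the ported class might look; this is an assumption about the eventual Java code, not the actual implementation:

// Hypothetical sketch -- the real Lock implementation isn't shown here.
public class Lock {
    private boolean held = false;

    // Returns true if the lock was acquired, false if it is already held.
    public boolean acquire() {
        if (held) {
            return false;
        }
        held = true;
        return true;
    }

    public void release() {
        held = false;
    }
}

Presumably the lock(...) pipeline step calls acquire() and aborts the binding when that fails, then calls release() once the value has been written.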
Two-way bindings can use validations, conversions, logging, locks, etc. just like regular one-way bindings (since two-way bindings are just two one-way bindings wired up to each other):
Bind.twoWay(
Bind.from(person, "birthDate")
.convert(Converters.dateToString(dateFormat)),
Bind.from(birthDateText, "text")
.validate(Validators.stringToDate(dateFormat))
.convert(Converters.stringToDate(dateFormat))
.validate(Validators.lessEqual(now))
);
We usually leave out the validations in the model-to-UI bindings. Validation really matters when you're copying data back from the UI to the model, to make sure domain constraints are satisfied, such as ensuring that a birth date occurred in the past.
And now for my favorite part: binding from multiple sources, to multiple destinations. Raise your hand if you have ever had to wire up a UI form like this:
Is there a foo? (o) Yes ( ) No <-- fooRadioGroup
Enter bar: ____________________ <-- barText
Requirements:
- fooRadioGroup.selectedItem is bound to model.foo (a boolean)
- barText.text is bound to model.bar (a string)
- barText must be enabled iff fooRadioGroup selection is Yes.
- When the user clicks "No," set model.bar to null but do not clear the text box. If the user clicks "Yes" again, set model.bar back to the contents of barText
Requirements 1 and 3 are easy:
var fooLock:Lock = new Lock();
Bind.twoWay(
Bind.from(model, "foo"),
Bind.from(fooRadioGroup, "selectedItem"),
fooLock); // explicitly provide the lock, see more below
Bind.from(fooRadioGroup, "selectedItem")
.to(barText, "enabled");
Requirements 2 and 4 are kind of related to each other. The model-to-UI binding is simple enough: just write the value straight across:
var barLock:Lock = new Lock();
Bind.from(model, "bar")
.lock(barLock)
.to(barText, "text");
However the inverse binding (UI-to-model) must also take fooRadioGroup.selectedItem into account to decide whether to write back barText.text (if Yes is selected) or null (if No is selected).
The Bind class has another trick up its sleeve:
Bind.fromAll(
Bind.from(fooRadioGroup, "selectedItem")
.lock(fooLock),
Bind.from(barText, "text")
)
.lock(barLock)
.convert(function(foo:Boolean, bar:String):String {
return foo ? bar : null;
})
.to(model, "bar");
Look closely. The binding pipelines that we pass to fromAll(...) become the arguments, in the order they are provided, to the converter and validator functions further down the pipeline. The first pipeline is from fooRadioGroup.selectedItem and therefore that boolean value is the first argument to the converter. Likewise, the barText.text pipeline is provided second, so that string value becomes the second argument to the converter.
The converter takes multiple values but returns only a single value. This is where those values get coalesced into a single value that we can write to the model--in this case, a String value or null.
The outer pipeline adds a locking step on barLock, which is expected since we need to prevent infinite loops between the last two pipelines. However, we are also locking on fooLock in the first of the inner pipelines. That lock is there because we had a problem with our bindings overwriting values in the UI depending on the order things were initialized.
It turned out that without that lock, if a new model object was set, the foo binding would fire first, copying model.foo to fooRadioGroup.selectedItem. That change would trigger our last binding, so if the new foo value was false, the last binding would clobber whatever was in the text box and write null to the model.bar field before the model.bar => barText.text binding had a chance to execute!
A good rule of thumb is that any time you need to bind from multiple sources, you should create a lock and share it among all the bindings that relate to the same field in the model.
Obviously there are several concepts that will have to be adapted to work elegantly with our existing APIs. Realms are a missing piece (Flex is single-threaded, so we didn't even have to consider them). We would also want to retrofit the existing binding classes to use this new API transparently, like we did with the transition from custom observables to custom properties.
So there you have it. This is my current vision of what Eclipse Data Binding should evolve toward.
Comments?
5 comments:
I like the interfaces you've outlined here (I think; the examples get a bit tricky to follow :-). However, I'm not sure the best path is one that treats 100% backwards compatibility as an absolute requirement. I know the platform team generally disagrees with my perspective, but I think this is a good time to clean up and avoid some mess and complexity in both the API and the implementation by introducing a new API here. Deprecate the old one or leave it behind altogether or whatever, but trying to wedge this into the existing APIs seems like a nightmare.
Perhaps e4 is the right opportunity to make that jump.
@Eric:
Yeah, after I published this post, I looked over it and thought I may have gone into too much detail all at once.
Going forward I'll try to explore just a few concepts at a time, and probably include some explanatory diagrams.
As for remaining backward compatible: the current binding API supports a well-known, limited set of binding pipeline steps:
* get value from source
* validate after get
* convert value
* validate after convert
* validate before set
* set value to target
These would translate easily to the new API:
Bind.from(source)
.validate(afterGetValidator)
.convert(converter)
.validate(afterConvertValidator)
.validate(beforeSetValidator)
.to(target);
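For comparison, the same fixed pipeline in today's API looks roughly like this (the observables, validators and converter are assumed to already exist; this is only meant to make the mapping concrete):

import org.eclipse.core.databinding.DataBindingContext;
import org.eclipse.core.databinding.UpdateValueStrategy;

DataBindingContext ctx = new DataBindingContext();
UpdateValueStrategy targetToModel = new UpdateValueStrategy(UpdateValueStrategy.POLICY_UPDATE)
    .setAfterGetValidator(afterGetValidator)          // validate after get
    .setConverter(converter)                          // convert value
    .setAfterConvertValidator(afterConvertValidator)  // validate after convert
    .setBeforeSetValidator(beforeSetValidator);       // validate before set
ctx.bindValue(targetObservable, modelObservable, targetToModel, new UpdateValueStrategy());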
The only tricky parts I can see are:
* make sure the realm behavior doesn't change, and
* enforcing the update policy e.g. UpdateValueStrategy.POLICY_UPDATE (the default) vs POLICY_CONVERT (up to validate after convert and then stop, except run the full pipeline when explicitly called) vs POLICY_NEVER (easy enough, this just means a one-way binding).
(continued)
I can see your point on starting with a clean slate again, but I don't think it's necessarily an either-or in this case. If we decide to deprecate and retire the old API we can always move it to a compatibility plugin.
When we do eventually decide to make backward breaking changes in version 2.0, there are a few things I would change:
* Realms. The cross-thread use case was compelling to the original DataBinding team, but somehow the design has gotten away from us to the point that we have to worry about realms everywhere. This should have been encapsulated into one place, in the form of an observable decorator.
* Make constructors protected, and recommend that clients get observables through factory methods. This gives us more freedom to evolve the API and to transparently return different classes depending on the context.
* Generics! It's time to turn the page.
* Update validators and converters to accept varargs (a rough sketch follows below).
* Change IObservableValue.getValueType() to return Class instead of Object. I'm not sure what the original justification was for this (so EMF observables could return EClass?) but it made a bunch of things just a tad too abstract and therefore not useful. Actually I think even the EMF observables just return the java.lang.Class.
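To make the generics and varargs items a bit more concrete, here is a purely hypothetical sketch (illustrative names only, each interface in its own file; nothing like this exists today):

import org.eclipse.core.runtime.IStatus;

public interface IConverter2<T> {
    // Varargs so multi-source (fromAll) pipelines can pass each value as an argument.
    T convert(Object... values);
}

public interface IValidator2 {
    IStatus validate(Object... values);
}

public interface IObservableValue2<T> {
    Class<T> getValueType(); // Class instead of Object
    T getValue();
    void setValue(T value);
}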
Eric: After careful thought I think you're right that this may be a good time to turn the page and start fresh.
I've been gathering notes for the past several months and will post them soon on this blog for public comment.