Tuesday, January 28, 2014

Surprisingly High View Count on Photo Posting Spreadsheet

Okay, it has been about 36 hours since I posted the Google+ photo posting statistics spreadsheet, so I thought I'd take a look at how many views the posting has received.

Well, it turns out it has been viewed 11,679 times!  This raises many interesting questions about the inner workings of Google+.  Here is a snapshot of the view statistics (click on the image for full 1:1 resolution):


By comparison, one of my more popular photo postings has received only about 1,000 views over a much longer period.  Since a large fraction of my followers probably added me because of my landscape photography postings, the spreadsheet's view count is quite surprising.  I wish I had access to the attributes of the people who viewed this posting, ideally broken down by company and by photography fan vs. techie.

Monday, January 27, 2014

Google+ Photo Add Statistics


(This is just a quick, possibly to-be-continued posting...)

I added a photo to my profile and several Google+ Communities last night at 8:13 pm.  The attached image is a screenshot of the associated statistics after a 12-hour overnight period.  I haven't yet decided how I want to visualize this information...  I'm certainly too busy to spend additional time on this exercise this week. :)

(Click on photo to see 1:1 resolution)

Monday, January 13, 2014

A Method for More Intelligent Touch Event Processing - Most Likely Widget

A Method for More Intelligent Touch Event Processing

Link to slide deck: http://goo.gl/y4Mx4G

Link to Java source code Main.java: https://goo.gl/T9iwhL   NOTE: Lines end with just \n, not \r\n

The above PDF slide deck summarizes ideas for reducing the frequency of accidentally invoking unintended UI widgets on touch devices.

The Java unit test draws a semi-transparent overlay graphic that visualizes which widget is activated for each pixel in a mockup e-mail app.

Summary

• Desktop pointing devices (mice) have precise, single-pixel accuracy - touch devices do not


• Depending on device attributes, touch users are lucky to achieve an accuracy of 10-30+ pixels


• This causes many occurrences of: User intends to activate widget A but inadvertently activates nearby widget B


• This problem exists because touch device and OS OEMs assume that the legacy desktop single-pixel precision model will work well on touch devices - this is a poor assumption


• My recent experiment suggests that the frequency of inadvertent widget activations (event-to-unintended-widget mapping) can be improved


• The above slide deck summarizes a project I did over this past weekend to demonstrate that, for one simple UI at least, an algorithm for mapping touch-event (x,y) points to widgets - one which considers touchpoint-to-widget centroid distances as well as which widget's bounding rectangle contains the touch point - can provide the user with a parameterizable/tunable margin-of-error border around widgets, with the potential to substantially reduce the activation of unintended widgets


• I might add that inadvertently activating an unintended widget can be dangerous if, for example, the unintended widget opens a malicious URL or e-mail


• More work is needed to evaluate and refine the proposed method in a variety of UI contexts, but I believe the presented algorithm has merit


So, extremely weary of touch device UIs which frequently activate the wrong widget, I spent some time this weekend developing and validating (on one simple UI) the above simple algorithm, which offers, imho, a better approach for mapping touch event (x,y) coordinates to UI widgets.

Below is a screenshot from this weekend mini-project showing a semi-transparent overlay which encodes, via color, the widget to which a touch event at each pixel would be mapped if the current naive Widget.rect.contains( Point pt ) logic were replaced with a simple algorithm based on touch-point distance to widget centroids plus which widget's bounding rectangle contains the touch point (x,y).

Original Google+ post on this topic

Below is a screenshot from the unit test touch-widget event mapping code:




As can be seen in the above screenshot, touch points anywhere within the semi-transparent red circle for the top checkbox would be routed to that checkbox.  This provides users with a margin of error which should reduce the frequency of the wrong/unintended widget receiving the touch event.  Note that the left and right overlays reflect two different centroids for the checkboxes - Left: the centroid of the checkbox alone; Right: the centroid of the table cell which contains the checkbox widget.
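To make the mapping rule concrete, here is a minimal Java sketch of the idea described above - direct containment wins, otherwise the nearest widget centroid within a tunable margin-of-error radius. The class and method names are my own illustration, not the actual Main.java code:

```java
import java.awt.Point;
import java.awt.Rectangle;
import java.util.List;

// Sketch of the centroid-distance widget mapping idea (illustrative names,
// not the original unit test source).
public class TouchMapper {
    public record Widget(String name, Rectangle bounds) {
        Point centroid() {
            return new Point(bounds.x + bounds.width / 2,
                             bounds.y + bounds.height / 2);
        }
    }

    /** Map a touch point to a widget: prefer direct containment, otherwise
     *  the nearest centroid within a tunable margin-of-error radius.
     *  Returns null if no widget is close enough. */
    public static Widget map(Point touch, List<Widget> widgets, double marginPx) {
        Widget best = null;
        double bestDist = Double.MAX_VALUE;
        for (Widget w : widgets) {
            if (w.bounds().contains(touch)) return w;   // exact hit always wins
            double d = w.centroid().distance(touch);
            if (d < bestDist) { bestDist = d; best = w; }
        }
        return bestDist <= marginPx ? best : null;      // honor the error margin
    }
}
```

The marginPx parameter is the tunable margin-of-error border: larger values give sloppier touches a chance of landing on the intended widget, at the cost of possibly capturing touches meant for empty space.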

Comments are welcome, either below or via email: 2to32minus1@gmail.com



Copyright © 2013-2014 Richard Creamer - All Rights Reserved

Tuesday, July 23, 2013

Idea for using Google Glass for Autism Research Study




Thought I'd just mention a quick idea...

I got an idea yesterday which may be a useful application of Google Glass.

In a nutshell, my idea is to have a research group/population of young parents wear Google Glass when interacting with their very young children.

The camera and microphone in the glasses, in conjunction with specialized software, could statistically characterize and analyze/track infant eye movement and sound/voice/speech patterns over time.

Should a rapid degradation in, say, the degree of infant eye contact be detected over a 24-48 hour period, a notification would promptly be sent to the parents and researchers along with supporting data.

In such a situation, the parents' memory of the specific factors the child was exposed to in the past days/week would be quite complete and accurate, and the set of suspect causal factors would be narrowed down dramatically, giving researchers a much shorter list of possible Autism causal factors to contemplate.

The above assumes that children can sometimes transition from normal to ASD over a very short period (days).  Many parents have stated they feel this is the case; however, it may not be.

Here is the URL to the Google+ posting I made earlier today where I describe the idea in more detail:



https://plus.google.com/109632926185753183937/posts/EJZvHvyh13h

That's a wrap...

Thursday, May 23, 2013

Distributed Large-Scale Object-Oriented Parallel Processing Framework

In 2007 I developed a prototype implementation of a large-scale computing framework capable of solving problems similar to Map-Reduce, as well as many perhaps more general computing problems and services.  (Feel free to contact me for more real-world examples of the computing tasks this framework was designed to support.)

The design for this framework evolved informally in my head from 2002-2007 until I actually got around to creating a default implementation in late 2007.

Why I'm sharing this work

Last year, I completed the +Coursera Machine Learning course taught by +Andrew Ng, and I am currently taking the Introduction to Data Science course taught by +Bill Howe of UW.  These two courses gave me my first exposure to Map-Reduce/Hadoop.

Since the distributed parallel computing framework I describe in this posting is similar to and perhaps more flexible in some ways than Map-Reduce, I thought I'd share this old/prior work from my startup of the time.

Quick Introduction

This link (http://goo.gl/55xYj) has a few more details, along with a few unit test results including examples of how wildcarding could be exploited.  Here is a high-level overview, mostly of how the novel message bus functioned:

• I designed and created a novel message bus similar to JMS.

Every message was an RPC request.

• Each message's recipient(s) could be addressed via multiple fields:

Namespace: Any unique string, but usually a hierarchical path (wildcards supported)

Class name: The leaf class name, or any ancestor class or implemented java interface (wildcards supported)

Uuid: A globally-unique identifier (wildcards supported)

Method name: The method name to be invoked by the recipient

• Each RPC request could be unicast or broadcast (one-to-many) by using wildcarding.  (For example, the uuid could be set to '*' which would result in all instances of a specific class in the specified namespace (which could be '*' as well) receiving and invoking the RPC request in parallel.  Similarly, the hierarchical namespace could have wildcards in its hierarchical path.)

• The potentially very large network of message servers was loosely coupled and dynamically configured into small, local groups/clusters for routing/switching.

• Messages were sent using the HTTP protocol, and platform-specific header fields added to each HTTP message header were used:

1) By message servers to route the messages

2) By the message recipients to invoke the method addressed by the header fields

3) To specify where to send the results of the computation

• While most RPC requests were asynchronous, synchronous RPC requests were also supported.

• All message servers routed their messages via dynamically-configurable DecisionTree routing objects which could access the HTTP header fields to compute next-hop message routing.  So, the routing methodology was quite flexible.  The last example in the unit test output file dumps out one example DecisionTree object.
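To illustrate the addressing scheme, here is a small Java sketch of how wildcard matching on the address fields might work. This is my reconstruction for illustration only - the field names (ns, cn, uuid, mn) follow the unit test output, but the matching logic is an assumption, not the original framework source:

```java
// Illustrative sketch of wildcard matching for the RPC address fields.
// Field names follow the unit-test output (ns, cn, uuid, mn); the logic
// is a reconstruction, not the original implementation.
public class RpcAddress {
    public final String ns, cn, uuid, mn;

    public RpcAddress(String ns, String cn, String uuid, String mn) {
        this.ns = ns; this.cn = cn; this.uuid = uuid; this.mn = mn;
    }

    /** '*' in a request field matches any value; a hierarchical namespace
     *  pattern like "/autogeny/*" matches any path under "/autogeny". */
    static boolean fieldMatches(String pattern, String value) {
        if (pattern.equals("*")) return true;
        if (pattern.endsWith("/*"))
            return value.startsWith(pattern.substring(0, pattern.length() - 1));
        return pattern.equals(value);
    }

    /** True if this concrete endpoint address is selected by the request
     *  (a wildcarded uuid thus selects every matching instance - broadcast). */
    public boolean accepts(RpcAddress request) {
        return fieldMatches(request.ns, ns)
            && fieldMatches(request.cn, cn)
            && fieldMatches(request.uuid, uuid);
    }
}
```

Under this sketch, a request with uuid = '*' is accepted by every instance of the addressed class in the addressed namespace, which is exactly the unicast-vs-broadcast behavior described above.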

Example Unit Test Output

Here is an excerpt from the unit test output (note uuid is wildcarded):

cmd selected: MsgServer1,2.getUuid()

RPC Request:

ns = /autogeny/sys
cn = net.autogeny.sockproto.MsgServerImpl
uuid = *
mn = getUuid

Responses:
MsgServer1
MsgServer2

(ns=namespace, cn=classname, mn=method name)

-----------------------------

Okay, that's a quick wrap...  Any comments/discussion are welcome, both private and public.  Thanks for reading. :)

Wednesday, August 1, 2012

Ignored Web and UX Problems

Continuing Problems of the Web

This is a quick and brief summary of some of the problems I’ve observed on the Web.  (I don’t have time to list others…)  I don’t know if anyone or any company has even recognized these as problems, nor if any research/work is underway to solve these problems. 

Linear Discussions
  • Common Pattern:
    • Blog or social network posting, followed by comments/discussion
  • Problems arise when:
    • The comment count exceeds about 20
    • Comments are too verbose
    • Comments branch into off-topic areas 
  • Observations:
    • Frequently, high-comment-count discussions suffer from:
      • Overly verbose responses which no one has the time to read
      • Uninformed comment entries, because comment authors do not take the time to read the other comments
      • Variations on, or repetition of, the same sentiment
  • Ideas:
    • Summarization:
      • Do not allow traditional comments.
      • Instead, allow commentors to do the following:
        • Nominate brief sentiment statements on posting topic (less than 10 words).  Common examples:
          • “That’s cool!”
          • “I agree.”
          • “Well, have you thought about X?”
        • Vote on nominated sentiment statements as being on- or off-topic
        • Agree or disagree with approved sentiments (single click, not words).  (Display vote counts next to each sentiment.)
        • Nominate a brief sentiment as the basis for a disagreement vote on an existing sentiment
        • Optional recursion (tree or graph structure)
        • Offer nominators of sentiments voted as off-topic the option of creating a distinct discussion.
    • Group into distinct lists, accessed via single buttons:
      • Branched topics
      • Sentiments voted as off-topic
    • Use semantics to further distill sentiments into three-part structures (RDF-like)
    • Use something like a Tree widget in combination with other UI widgets to support more effective user navigation of a discussion and its sentiments, etc.
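As a rough illustration of the summarization idea above, here is a minimal Java sketch of a sentiment node with agree/disagree tallies and optional nested sentiments (the tree/graph recursion). All names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the sentiment-voting idea: short nominated statements
// with vote tallies and optional nested sentiments. Illustrative only.
public class Sentiment {
    public final String text;              // <= 10 words by convention
    public int agree, disagree;
    public boolean offTopic;               // set by on-/off-topic voting
    public final List<Sentiment> replies = new ArrayList<>();  // optional recursion

    public Sentiment(String text) { this.text = text; }

    /** Record a single-click agree or disagree vote. */
    public void vote(boolean agrees) {
        if (agrees) agree++; else disagree++;
    }

    /** Simple display ordering: net approval, highest first. */
    public int netVotes() { return agree - disagree; }
}
```

A UI could then render the sentiments sorted by netVotes(), with off-topic sentiments grouped into their own list behind a single button, as proposed above.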
Incoming Stream Postings Buried
  • Many people do not have time to monitor their social network(s) incoming stream feeds frequently.
  • As a result, when they do get time to check in, many interesting posts have been pushed so far down their queue that they never see them.
  • There are many problems behind this problem, including:
    • Stream Posting Prioritization 
      • Many people post too frequently, and have a low "interesting post" density.
      • To the best of my knowledge, no social networking platform currently incorporates Machine Learning which allows users to privately rate/rank their interest level in postings from specific posters and on specific topics/keywords.
      • Thus, social networking platforms have no basis on which to group and prioritize incoming streams into a more structured, non-linear feed.  
      • (Although, they could make an attempt to track how much time users spend reading/viewing various posts.)
    • No Topic Trees
      • No social networking platform that I am aware of offers users the ability to list the topic areas in which they post, nor the topic areas in which they are interested.
    • Multiple Personas
      • Sometimes users follow people because they are, for example, a technical community manager who periodically posts on interesting professional/technical subjects.
      • Unfortunately, these people publish personal postings as well under the same user ID, and we start seeing postings about their cats, or restaurant visits.
      • It would be helpful if social networking platforms allowed users to define multiple personas, so that users could post under a specific persona and followers could follow only the persona(s) of interest.
      • This problem area could also be improved via Topic Trees.
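A toy Java sketch of the prioritization idea above - learning per-poster interest from a user's private ratings. A real system would use actual Machine Learning; this simply averages past ratings, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Toy per-user stream prioritization from private ratings. Averages a
// user's past ratings per poster; unrated posters get a neutral score.
public class StreamRanker {
    private final Map<String, double[]> ratings = new HashMap<>(); // poster -> {sum, count}

    /** Record a private rating (0.0 = uninteresting .. 1.0 = very interesting). */
    public void rate(String poster, double score) {
        double[] r = ratings.computeIfAbsent(poster, k -> new double[2]);
        r[0] += score;
        r[1] += 1;
    }

    /** Predicted interest for a poster; neutral 0.5 when unrated. */
    public double interest(String poster) {
        double[] r = ratings.get(poster);
        return (r == null || r[1] == 0) ? 0.5 : r[0] / r[1];
    }
}
```

The incoming stream could then be grouped and ordered by interest() rather than strict reverse chronology, so low-density posters no longer bury the interesting posts.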
Verbosity
  • Busy people simply do not have the time to read lengthy postings.
  • Sadly, these same people are often the ones who might offer the most insightful comments.
  • Posters should take the time to be concise and maximize the clarity of their postings.
Real Estate
  • From mobile devices to desktops, many web pages simply waste too much screen real estate (pixels).
  • Form and Function
    • Often, website or app designers pursue “form” without an associated “functional” benefit.
      • Example: Lists of posting summaries with too many lines of text per posting, or wasting too much space on unnecessarily-large “pretty” posting images.
Mobile Smartphone Ergonomics and UX
  • A few problems for single-handed usage:
    • Placement of UI elements (buttons) either too far from (or too close to) a user’s thumb, making them uncomfortable or impossible to reach.
    • No left/right handedness setting
      • In their Settings menus, phone UI frameworks should support a user’s ability to specify that they are left- or right-handed.
      • The position of UI screen elements should then adapt based on this information so that all users can comfortably reach these UI elements.
  • Lazy image load causes discontinuous scrolling
    • When scrolling through posting lists, users expect scrolling to be smooth.
    • Lazy loading of graphics/images which are sometimes included in posting “header” summaries in posting lists can cause the scrolling user experience to be “jumpy.”
  • Poorly-tuned UX scrolling momentum/velocity can cause problems:
    • If too slow (high “coefficient of friction”), then too many fling gestures are required.
    • If too fast (low “coefficient of friction”), then users cannot easily control list navigation.
  • Scrolling Directionality Constraints
    • Users should be able to smoothly scroll documents, images, and other content in any direction.
    • Sadly, in many cases only vertical and horizontal scrolling are allowed, and even then, the horizontal scrolling can be very cumbersome.
Semantics-Based Surveys
  • I, for one, have never taken a survey which allowed me to precisely express my sentiments regarding the purpose/subject of the survey.  There are two reasons for this:
    • Survey authors are unable to assemble the “right” questions.
    • There is no support for semantics-based “statements about statements.”
  • What is needed is a lightweight semantics-based framework which makes it easy for survey authors to enumerate the “actor” concepts/elements in the query space for which they seek user input - ideally, without the authors even knowing that semantics are being used under the hood.
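As a sketch of the three-part, RDF-like structure mentioned above - including a “statement about a statement” - here is a minimal Java illustration (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of an RDF-like triple store for survey sentiments.
// A Triple's subject may itself be a Triple, enabling statements
// about statements (reification). Illustrative only.
public class SurveyModel {
    /** Three-part structure: subject, predicate, object. */
    public record Triple(Object subject, String predicate, Object object) { }

    private final List<Triple> triples = new ArrayList<>();

    /** Record a statement and return it so it can be referenced later. */
    public Triple state(Object subject, String predicate, Object object) {
        Triple t = new Triple(subject, predicate, object);
        triples.add(t);
        return t;
    }

    public int count() { return triples.size(); }
}
```

Because state() returns the Triple it created, a survey respondent's confidence or disagreement can be attached to an earlier statement as a second triple, rather than forcing the author to anticipate every possible question.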

Sunday, February 19, 2012

The need for provenance and other metadata for Web content

Here is a public Google+ URL to an interesting discussion regarding provenance and other metadata for Web content, to which I contributed a few thoughts: http://goo.gl/7W2oh  (I hope this link works, both now and in the future.)