I am a developer working on my first UI. My application requires the user to log into multiple accounts (this is a trading application). The user needs to enter a username/password for each account and connect to the respective APIs.

Obviously they can save their username/password so they don't need to enter it every time they start the application. In this situation my app could also automatically log in based on the saved credentials.

My thoughts on this are: although this would make things 'easy', i.e. the user doesn't need to do anything to connect to the API, personally I would like to click the 'Connect' button manually and see my connected icon flash green. That would make me feel pretty good; it would give me a purpose as a user. The positive reaction to a simple action I took would please me and make me feel smart.

I suppose a real world example, for me at least, would be a car that turns on automatically once I sit down and close the door. I don't want that. I have a close relationship with my car. It makes me feel like I am in full control and connected to the car when I turn the key or press the start button.

Does this concept exist in UX? If so, what is it called?

Do other characters know where the bulette is if it doesn't attack? If they can attack it, how do I handle it? Do I give it total cover? Can they even attack it when it is underground?

If they don't know where it is and they want to guess, how do I randomize that? Or should I just take the creature off of the board when it is underground and try to remember where it is?

As a third-year graduate student (theoretical physics), I need methods to work out whether I am doing well or not: am I making progress? Am I putting in enough effort?

How do I judge/measure my performance?


Of course, a primary complication in my situation is that I found an advisor only toward the end of my second year of the PhD (before that I was working in other groups, on topics I didn't like at all).

I was building an exchange with a partner's XML service, and I couldn't get the XML right, but as with all things Drupal, the xmlrpc error and action logging is less than robust.

So I did this in includes/xmlrpc.inc:

function xmlrpc_request($method, $args) {
  $xmlrpc_request = new stdClass();
  $xmlrpc_request->method = $method;
  $xmlrpc_request->args = $args;
  $xmlrpc_request->xml = <<<EOD
<?xml version="1.0"?>
<methodCall>
<methodName>{$xmlrpc_request->method}</methodName>
<params>
EOD;
  foreach ($xmlrpc_request->args as $arg) {
    $xmlrpc_request->xml .= '<param><value>';
    $v = xmlrpc_value($arg);
    $xmlrpc_request->xml .= xmlrpc_value_get_xml($v);
    $xmlrpc_request->xml .= "</value></param>\n";
  }
  $xmlrpc_request->xml .= '</params></methodCall>';

  /* Begin ridiculously tiny hack */
  watchdog('xmlrpc', $xmlrpc_request->xml);
  /* End ridiculously tiny hack */

  return $xmlrpc_request;
}

I got the data I needed, and within 10 minutes I had the partner interface responding appropriately to my request, because (shocking, I know) logs are good.

I like the extra logging, and I want to keep it. What is the clean, straightforward, and most importantly, Drupal-approved way of doing that?

[Originally posted this to opscode forum, got no response]

I’m testing out a free hosted chef-server account and multiple subcommands are failing with ‘Unexpected Errors’. Perhaps my version and the server version are incompatible?

OS: Ubuntu 12.04LTS

Local Chef: 10.12.0 (Installed through gem)

Local Ruby: 1.8.7

Also, the workstation machine has been configured manually, but the clients I’ve been experimenting with are launched with the Rackspace plugin (using ‘knife rackspace server create…’). The problem commands seem to fail while talking to the hosted chef-server, before knife ever tries to modify the client, so I don’t believe the client is where the problem lies. The virtual servers launched by ‘knife rackspace server create’ come up properly, but deleting them with knife fails.

If I include a recipe in the run_list when I create the server, the recipe is properly added to the run_list. If I try to add one later, or remove the one the server was initialized with, those commands fail.

Here is the output of a few relevant commands (with stacktraces):

https://gist.github.com/7100ada3fd6690113697

Quite recently, I have been thinking about the use of as, is, and direct casts in C#.

Logically, is it a better idea to use:

 var castedValue = value as type;
 if (null != castedValue)
 {
     // Use castedValue.
 }

than:

if (value is type)
{
    var castedValue = (type)value;
    // Use castedValue.
}

But I still have issues when dealing with this kind of pattern:

if (value is Implementation1)
{
    var castedValue = (Implementation1)value;
    // Use castedValue in a first way.
}
else if (value is Implementation2)
{
    var castedValue = (Implementation2)value;
    // Use castedValue in a second way.
}
else if (value is Implementation3)
{
    var castedValue = (Implementation3)value;
    // Use castedValue in a third way.
}
// And so on...

How can I improve this code to avoid casting twice when the type check succeeds, and is the double cast really necessary?

I don't want to make the code unreadable, but the idea is not to test a cast if a previous one succeeded.

I had several ideas to fix this, but none of them really seems to satisfy these conditions...

Edit:

Here is a case I've got. There is an object that is created by a lower layer of the code that I do not control. This object can be of several inherited types. In my layer, I want to create a specific object depending on that type. I have this factory method:

public static IHighLevelObject MakeHighLevelObject(LowLevelObject lowLevelObject)
{
    IHighLevelObject highLevelObject;

    if (lowLevelObject is LowLevelObject1)
    {
        highLevelObject = new HighLevelObject1((LowLevelObject1)lowLevelObject);
    }
    else if (lowLevelObject is LowLevelObject2)
    {
        highLevelObject = new HighLevelObject2((LowLevelObject2)lowLevelObject);
    }
    else if (lowLevelObject is LowLevelObject3)
    {
        highLevelObject = new HighLevelObject3((LowLevelObject3)lowLevelObject);
    }
    // And so on...
    else
    {
        // Without a final else, highLevelObject is unassigned on this path
        // and the method will not compile.
        throw new ArgumentException("Unknown LowLevelObject type.");
    }

    return highLevelObject;
}

How do I solve this case?

Database design: one database or multiple databases, which is best?

We have a database with about 100 tables, accessed by five different applications. Each application has its own set of tables but also needs access to about 20 master tables (used by all our systems: users, accounts, contacts, shops, etc.). Now we are going to add another 15 or so applications, each with its own set of tables but again needing access to the master tables. Before we set this up, which arrangement do you think is best: one database holding all the applications' tables, including the master ones, or a separate database per application with the master records staying in a master database?

Anyone's thoughts here would be much appreciated. I think I am leaning towards separate databases so they can be managed better, and performance should be better (maybe not?).

If I go with separate databases, are there any implications? Setting up foreign-key references across databases won't be possible, there may be a performance cost joining databases for selects and updates, and ASP.NET needs two connection strings (is that even possible with, say, Entity Framework database-first or LINQ DBML?).

I have a very simple model. This model uses data that are not given as continuous distributions, but are described by percentiles. What is the best way to sample these percentile bins, when the bins are of unequal size?

So, for example, to select the body weight for a given individual, I pick a random number between 0-100, then match this value to the nearest percentile. I don't interpolate or extrapolate, I just match the value I draw to the nearest bin. (Extrapolating isn't a good idea given the data.) Let's say, for body weight, the percentiles I have are 25, 50 and 75. But this gives bin sizes of 37.5 (0-37.5), 25 (37.5-62.5), and 37.5 (62.5-100). So because of the unequal bin sizes, I'm going to be sampling both the 25% and 75% bins much more than I'll be sampling the median, 50%, bin. This is the opposite of what I'd like to happen.
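A minimal sketch of the nearest-bin matching described above, in Python (the percentile levels 25/50/75 are from the question; the body-weight context, the seed, and the 100,000-draw count are arbitrary):

```python
import random
from collections import Counter

percentile_levels = [25, 50, 75]  # the percentiles available in the data

def nearest_bin(u, levels):
    """Match a uniform draw u in [0, 100] to the nearest percentile level."""
    return min(levels, key=lambda p: abs(p - u))

random.seed(0)
counts = Counter(nearest_bin(random.uniform(0, 100), percentile_levels)
                 for _ in range(100_000))

# With levels 25/50/75, the bin edges fall at 37.5 and 62.5, so the outer
# bins each catch about 37.5% of draws and the median bin only about 25%.
for level in percentile_levels:
    print(level, counts[level] / 100_000)
```

If each reported percentile is meant to be equally likely, one way to remove the imbalance is to draw the bin directly, e.g. random.choice(percentile_levels), rather than drawing a 0-100 value and snapping it to the nearest level.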

I could weight the bins, but that seems arbitrary. Or, instead of drawing my random number from a uniform distribution 0-100, I could draw it from a normal distribution centered at the median, but that also seems arbitrary. Or, alternatively, I'd love to be convinced that I don't actually have a problem here.

Any ideas on how I could better set this up? Thanks!

I'm trying to regulate a single AA battery to 5V for charging a USB device.

I know that even at perfect efficiency it won't provide much juice (1.5 V × 1500 mAh / 5 V = 450 mAh). But what is the expected efficiency for stepping up from 1.5 V to 5 V? Is 80% achievable?
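The capacity arithmetic above, spelled out (assuming a nominal 1500 mAh AA cell and ignoring the cell's discharge curve; the efficiency figures are illustrative guesses, not converter specs):

```python
# Charge available at 5 V from a 1.5 V, 1500 mAh cell, by conservation of energy.
v_cell, capacity_mah, v_out = 1.5, 1500, 5.0

ideal_mah_at_5v = v_cell * capacity_mah / v_out  # 450 mAh at 100% efficiency

for eff in (0.7, 0.8, 0.9):
    print(f"{eff:.0%} efficient boost: {ideal_mah_at_5v * eff:.0f} mAh at 5 V")
```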

Are there other issues one should worry about in designing the application?

How long does it take for a citation to show up online? I can see a couple of papers that were cited very recently, and the papers containing those citations have been online for a week or two. How long should I expect it to take before those citations show up in Google Scholar or other indexing services?