Should software applications be granted the same citizenship rights as robots, i.e. treated as resources that can be requested and allocated?
Two compelling use cases:
Service-Private Software
Software that runs inside a single concert service and, once allocated, is completely owned and controlled by that service. e.g. a typical ROS-style service would roslaunch its own mini-system of software nodes. None of these nodes would interoperate with other concert services.
LAN Services (~Web Services)
Software that you really want to share across multiple services on the concert, or where you want to avoid running multiple independent instances inside multiple concert services. e.g. a mongo database, map saver, map store, world transform tree.
Requests - first we have to define what a resource is...
Starting Rapps - who would be responsible for the installation/startup machinery of software robots?
Sharing Resources - important if we want to define resources as LAN-service-style resources.
- Don't worry about this for now; focus on shared implementations and tackle it head-on when we reach the later rocon milestone dedicated to intelligently scheduling software.
- In the meeting, everyone agreed it is desirable.
(Jack) Should be done with an awareness of CPU cycles, i.e. farming out jobs.
(Daniel) Already slated as milestone 10 on rocon's OPP (in the first half of 2014).
(Daniel) Most of the time we can easily fire up software inside our own service for private use, so there is not much of a problem there. The real problem is software that needs to be shared across services.
- If we really want to, we can get the scheduler to fire up an app manager frontend whenever it sees a *.*.pc.* resource, but we don't have an urgent use case for that yet.
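As a strawman for the "first we have to define what a resource is" question above, a request might minimally carry a rapp name, a platform tuple describing what kind of client may run it, and a shared flag for LAN-service-style resources. All field names here are hypothetical sketches, not an agreed design:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """Hypothetical resource request (names are placeholders)."""
    rapp: str             # rapp to run, e.g. 'rocon_apps/world_transform_tree'
    uri: str              # dot-separated platform tuple, '*' as wildcard
    shared: bool = False  # LAN-service-style resources may be shared

# Example: a request for a world transform tree on any linux ros pc.
req = Resource(rapp="rocon_apps/world_transform_tree", uri="linux.*.ros.pc")
```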
Pre December 3
(Daniel) These could very easily represent themselves just like robots with rapps, e.g. consider a pc robot with a list of rapps it can run, specified by a tuple such as linux.*.ros.pc.world_transform_tree.
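Matching a requested tuple like the one above against a concrete client could be done segment by segment with shell-style wildcards. A minimal sketch (the `matches` helper is illustrative, not existing rocon API):

```python
from fnmatch import fnmatch

def matches(pattern: str, resource: str) -> bool:
    """Segment-wise wildcard match of dot-separated resource tuples."""
    p_parts = pattern.split(".")
    r_parts = resource.split(".")
    if len(p_parts) != len(r_parts):
        return False
    # Each pattern segment (which may be '*') must match its counterpart.
    return all(fnmatch(r, p) for p, r in zip(p_parts, r_parts))

matches("linux.*.ros.pc.world_transform_tree",
        "linux.precise.ros.pc.world_transform_tree")  # True
```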
(Daniel) We originally started with the notion of 'everything is a robot', implying that we applied the same definition of client to robots, software and interactive human clients (tablets w/ humans). This keeps the processes simpler. We eventually classified interactive human clients separately though, since their processes had almost no overlap with those of robot clients. Should we separately classify software clients as well?
(Daniel) Software resources have the following attributes that robot/device clients don't have - they can come up on the fly (don't have to be pre-invited) and they are distributable (think computation farm).
(Jack) I like to think in terms of the underlying hardware resource. Access to a compute farm is sharable, but rather than swamp it to the point where nothing gets accomplished, we could impose appropriate, adjustable limits on the number of concurrent jobs allowed. Requests beyond that limit would need to wait their turn. Maybe the simplest way to do that is to advertise a pool of rapp instances that are available for concurrent use.
(Daniel) Great, you're heading in the direction we were already thinking (milestone 10). Surely there must be existing software to help us do that as well. For now, we are just working with the assumption that there are sufficient computational resources available on the concert pc.
- In light of the above (concert pc farm), this gets back to the question: is scheduling these jobs within the scope of this scheduler? Should we handle it inside the scheduler temporarily until milestone 10 is reached?
(Jack) I think so. Initially, each of these jobs should just advertise a small, fixed number of rapp instances (like five).
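The "small, fixed number of rapp instances" idea could be as simple as a counting semaphore over the advertised pool size: up to N allocations succeed immediately, and further requests wait their turn. A minimal sketch (class and method names are hypothetical, not existing rocon API):

```python
import threading

class RappInstancePool:
    """Hypothetical pool advertising a fixed number of concurrently
    usable instances of one rapp; extra requests wait their turn."""

    def __init__(self, rapp_name: str, size: int = 5):
        self.rapp_name = rapp_name
        self._slots = threading.Semaphore(size)

    def allocate(self, timeout: float = None) -> bool:
        # Block until an instance slot is free (or timeout); True on success.
        if timeout is None:
            return self._slots.acquire()
        return self._slots.acquire(timeout=timeout)

    def release(self) -> None:
        # Return a slot to the pool.
        self._slots.release()

# Usage: a pool of two shared world transform tree instances.
pool = RappInstancePool("world_transform_tree", size=2)
pool.allocate()                 # first slot taken
pool.allocate()                 # second slot taken
pool.allocate(timeout=0.05)    # pool exhausted -> False after timeout
pool.release()                  # a slot comes back, next allocate succeeds
```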
(Daniel) Sharing will be tricky.